Description (string, lengths 18–161k)
Code (string, lengths 15–300k)
coding: utf-8. Copyright 2021 HuggingFace Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0. Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
Conversion with defaults (rescale and channel first); conversion with rescale and not channel first; conversion with no rescale and channel first; conversion with no rescale and not channel first. By default, rescale for an array of ints and channel permute; same with no permute; force rescale to False; force rescale to False and no channel permute; now test the default rescale for a float array (defaults to False). Test a single image is converted to a list of 1 image; test a batch of images is converted to a list of images; test a list of images is not modified; test batched masks with no channel dimension are converted to a list of masks. Test a single image is converted to a list of 1 image; test a batch of images is converted to a list of images; test a list of images is left unchanged. By default, rescale for a tensor of ints and channel permute; same with no permute; force rescale to False; force rescale to False and no channel permute; now test the default rescale for a float tensor (defaults to False). On an image, to_pil_image is a no-op. By default, no rescale for an array of ints; if the array is channel first, proper reordering of the channels is done; if the array has floating type, it is rescaled by default; you can override the default to rescale, and with floats plus channel first. By default, no rescale for a tensor of ints; if the tensor is channel first, proper reordering of the channels is done; if the tensor has floating type, it is rescaled by default; you can override the default to rescale, and with floats plus channel first. Size can be an int or a tuple of ints; passing an array converts it to a PIL image; (height, width): square image, rectangular image (h > w), rectangular image (h < w), single integer, or single integer in a tuple/list. Size can be an int or a tuple of ints; if size is an int, the smaller edge of the image will be matched to this number, i.e. if height > width then the image will be rescaled to (size * height / width, size); passing an array converts it to a PIL image. Size can be an int or a tuple of ints; check we get the same results as with NumPy arrays. PIL images are converted to NumPy arrays for the normalization; during the conversion, rescale and channel first will be applied. Mean and std can be passed as lists or NumPy arrays; normalize will detect automatically whether channel first or channel last is used. Mean and std can be passed as lists or tensors; normalize will detect automatically whether channel first or channel last is used. Test various crop sizes: bigger on all dimensions, on one of the dimensions only, and on both dimensions; PIL image size is transposed compared to NumPy or PyTorch (width first instead of height first). Test various crop sizes: bigger on all dimensions, on one of the dimensions only, and on both dimensions; check the result is consistent with PIL image crop. Test various crop sizes: bigger on all dimensions, on one of the dimensions only, and on both dimensions; check the result is consistent with PIL image crop. Image with mode RGBA; image with mode LA; image with mode L. Test we can infer the size and channel dimension of an image; test the channel dimension can be overridden. Test we fail with invalid input; test we fail if neither the first nor the last dimension is of size 3 or 1, but if we explicitly set one of the number of channels to 50, it works. Test we correctly identify the channel dimension; we can take a batched array of images and find the dimension. Test we correctly identify the channel dimension; we can take a batched array of images and find the dimension.
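To make the conversion defaults described above concrete, here is a minimal sketch (not part of the dataset row; it assumes Pillow is installed so that transformers exposes ImageFeatureExtractionMixin, and it only restates behaviour that the tests below verify):

import numpy as np
from transformers import ImageFeatureExtractionMixin

feature_extractor = ImageFeatureExtractionMixin()
array = np.random.randint(0, 256, (16, 32, 3), dtype=np.uint8)  # H x W x C, integer pixels

converted = feature_extractor.to_numpy_array(array)                        # rescaled to [0, 1], channel first
no_permute = feature_extractor.to_numpy_array(array, channel_first=False)  # rescaled, layout unchanged
no_rescale = feature_extractor.to_numpy_array(array, rescale=False)        # still uint8, channel first

assert converted.shape == (3, 16, 32)
assert no_permute.shape == (16, 32, 3)
assert no_rescale.dtype == np.uint8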
import os import tempfile import unittest import datasets import numpy as np import pytest from huggingface_hub.file_download import http_get from requests import ConnectTimeout, ReadTimeout from tests.pipelines.test_pipelines_document_question_answering import INVOICE_URL from transformers import is_torch_available, is_vision_available from transformers.image_utils import ChannelDimension, get_channel_dimension_axis, make_list_of_images from transformers.testing_utils import is_flaky, require_torch, require_vision if is_torch_available(): import torch if is_vision_available(): import PIL.Image from transformers import ImageFeatureExtractionMixin from transformers.image_utils import get_image_size, infer_channel_dimension_format, load_image def get_random_image(height, width): random_array = np.random.randint(0, 256, (height, width, 3), dtype=np.uint8) return PIL.Image.fromarray(random_array) @require_vision class ImageFeatureExtractionTester(unittest.TestCase): def test_conversion_image_to_array(self): feature_extractor = ImageFeatureExtractionMixin() image = get_random_image(16, 32) array1 = feature_extractor.to_numpy_array(image) self.assertTrue(array1.dtype, np.float32) self.assertEqual(array1.shape, (3, 16, 32)) array2 = feature_extractor.to_numpy_array(image, channel_first=False) self.assertTrue(array2.dtype, np.float32) self.assertEqual(array2.shape, (16, 32, 3)) self.assertTrue(np.array_equal(array1, array2.transpose(2, 0, 1))) array3 = feature_extractor.to_numpy_array(image, rescale=False) self.assertTrue(array3.dtype, np.uint8) self.assertEqual(array3.shape, (3, 16, 32)) self.assertTrue(np.array_equal(array1, array3.astype(np.float32) * (1 / 255.0))) array4 = feature_extractor.to_numpy_array(image, rescale=False, channel_first=False) self.assertTrue(array4.dtype, np.uint8) self.assertEqual(array4.shape, (16, 32, 3)) self.assertTrue(np.array_equal(array2, array4.astype(np.float32) * (1 / 255.0))) def test_conversion_array_to_array(self): feature_extractor = ImageFeatureExtractionMixin() array = np.random.randint(0, 256, (16, 32, 3), dtype=np.uint8) array1 = feature_extractor.to_numpy_array(array) self.assertTrue(array1.dtype, np.float32) self.assertEqual(array1.shape, (3, 16, 32)) self.assertTrue(np.array_equal(array1, array.transpose(2, 0, 1).astype(np.float32) * (1 / 255.0))) array2 = feature_extractor.to_numpy_array(array, channel_first=False) self.assertTrue(array2.dtype, np.float32) self.assertEqual(array2.shape, (16, 32, 3)) self.assertTrue(np.array_equal(array2, array.astype(np.float32) * (1 / 255.0))) array3 = feature_extractor.to_numpy_array(array, rescale=False) self.assertTrue(array3.dtype, np.uint8) self.assertEqual(array3.shape, (3, 16, 32)) self.assertTrue(np.array_equal(array3, array.transpose(2, 0, 1))) array4 = feature_extractor.to_numpy_array(array, rescale=False, channel_first=False) self.assertTrue(array4.dtype, np.uint8) self.assertEqual(array4.shape, (16, 32, 3)) self.assertTrue(np.array_equal(array4, array)) array5 = feature_extractor.to_numpy_array(array2) self.assertTrue(array5.dtype, np.float32) self.assertEqual(array5.shape, (3, 16, 32)) self.assertTrue(np.array_equal(array5, array1)) def test_make_list_of_images_numpy(self): images = np.random.randint(0, 256, (16, 32, 3)) images_list = make_list_of_images(images) self.assertEqual(len(images_list), 1) self.assertTrue(np.array_equal(images_list[0], images)) self.assertIsInstance(images_list, list) images = np.random.randint(0, 256, (4, 16, 32, 3)) images_list = make_list_of_images(images) 
self.assertEqual(len(images_list), 4) self.assertTrue(np.array_equal(images_list[0], images[0])) self.assertIsInstance(images_list, list) images = [np.random.randint(0, 256, (16, 32, 3)) for _ in range(4)] images_list = make_list_of_images(images) self.assertEqual(len(images_list), 4) self.assertTrue(np.array_equal(images_list[0], images[0])) self.assertIsInstance(images_list, list) masks = np.random.randint(0, 2, (4, 16, 32)) masks_list = make_list_of_images(masks, expected_ndims=2) self.assertEqual(len(masks_list), 4) self.assertTrue(np.array_equal(masks_list[0], masks[0])) self.assertIsInstance(masks_list, list) @require_torch def test_make_list_of_images_torch(self): images = torch.randint(0, 256, (16, 32, 3)) images_list = make_list_of_images(images) self.assertEqual(len(images_list), 1) self.assertTrue(np.array_equal(images_list[0], images)) self.assertIsInstance(images_list, list) images = torch.randint(0, 256, (4, 16, 32, 3)) images_list = make_list_of_images(images) self.assertEqual(len(images_list), 4) self.assertTrue(np.array_equal(images_list[0], images[0])) self.assertIsInstance(images_list, list) images = [torch.randint(0, 256, (16, 32, 3)) for _ in range(4)] images_list = make_list_of_images(images) self.assertEqual(len(images_list), 4) self.assertTrue(np.array_equal(images_list[0], images[0])) self.assertIsInstance(images_list, list) @require_torch def test_conversion_torch_to_array(self): feature_extractor = ImageFeatureExtractionMixin() tensor = torch.randint(0, 256, (16, 32, 3)) array = tensor.numpy() array1 = feature_extractor.to_numpy_array(array) self.assertTrue(array1.dtype, np.float32) self.assertEqual(array1.shape, (3, 16, 32)) self.assertTrue(np.array_equal(array1, array.transpose(2, 0, 1).astype(np.float32) * (1 / 255.0))) array2 = feature_extractor.to_numpy_array(array, channel_first=False) self.assertTrue(array2.dtype, np.float32) self.assertEqual(array2.shape, (16, 32, 3)) self.assertTrue(np.array_equal(array2, array.astype(np.float32) * (1 / 255.0))) array3 = feature_extractor.to_numpy_array(array, rescale=False) self.assertTrue(array3.dtype, np.uint8) self.assertEqual(array3.shape, (3, 16, 32)) self.assertTrue(np.array_equal(array3, array.transpose(2, 0, 1))) array4 = feature_extractor.to_numpy_array(array, rescale=False, channel_first=False) self.assertTrue(array4.dtype, np.uint8) self.assertEqual(array4.shape, (16, 32, 3)) self.assertTrue(np.array_equal(array4, array)) array5 = feature_extractor.to_numpy_array(array2) self.assertTrue(array5.dtype, np.float32) self.assertEqual(array5.shape, (3, 16, 32)) self.assertTrue(np.array_equal(array5, array1)) def test_conversion_image_to_image(self): feature_extractor = ImageFeatureExtractionMixin() image = get_random_image(16, 32) image1 = feature_extractor.to_pil_image(image) self.assertTrue(isinstance(image, PIL.Image.Image)) self.assertTrue(np.array_equal(np.array(image), np.array(image1))) def test_conversion_array_to_image(self): feature_extractor = ImageFeatureExtractionMixin() array = np.random.randint(0, 256, (16, 32, 3), dtype=np.uint8) image1 = feature_extractor.to_pil_image(array) self.assertTrue(isinstance(image1, PIL.Image.Image)) self.assertTrue(np.array_equal(np.array(image1), array)) image2 = feature_extractor.to_pil_image(array.transpose(2, 0, 1)) self.assertTrue(isinstance(image2, PIL.Image.Image)) self.assertTrue(np.array_equal(np.array(image2), array)) image3 = feature_extractor.to_pil_image(array.astype(np.float32) * (1 / 255.0)) self.assertTrue(isinstance(image3, PIL.Image.Image)) 
self.assertTrue(np.array_equal(np.array(image3), array)) image4 = feature_extractor.to_pil_image(array.astype(np.float32), rescale=False) self.assertTrue(isinstance(image4, PIL.Image.Image)) self.assertTrue(np.array_equal(np.array(image4), array)) image5 = feature_extractor.to_pil_image(array.transpose(2, 0, 1).astype(np.float32) * (1 / 255.0)) self.assertTrue(isinstance(image5, PIL.Image.Image)) self.assertTrue(np.array_equal(np.array(image5), array)) @require_torch def test_conversion_tensor_to_image(self): feature_extractor = ImageFeatureExtractionMixin() tensor = torch.randint(0, 256, (16, 32, 3)) array = tensor.numpy() image1 = feature_extractor.to_pil_image(tensor) self.assertTrue(isinstance(image1, PIL.Image.Image)) self.assertTrue(np.array_equal(np.array(image1), array)) image2 = feature_extractor.to_pil_image(tensor.permute(2, 0, 1)) self.assertTrue(isinstance(image2, PIL.Image.Image)) self.assertTrue(np.array_equal(np.array(image2), array)) image3 = feature_extractor.to_pil_image(tensor.float() / 255.0) self.assertTrue(isinstance(image3, PIL.Image.Image)) self.assertTrue(np.array_equal(np.array(image3), array)) image4 = feature_extractor.to_pil_image(tensor.float(), rescale=False) self.assertTrue(isinstance(image4, PIL.Image.Image)) self.assertTrue(np.array_equal(np.array(image4), array)) image5 = feature_extractor.to_pil_image(tensor.permute(2, 0, 1).float() * (1 / 255.0)) self.assertTrue(isinstance(image5, PIL.Image.Image)) self.assertTrue(np.array_equal(np.array(image5), array)) def test_resize_image_and_array(self): feature_extractor = ImageFeatureExtractionMixin() image = get_random_image(16, 32) array = np.array(image) resized_image = feature_extractor.resize(image, 8) self.assertTrue(isinstance(resized_image, PIL.Image.Image)) self.assertEqual(resized_image.size, (8, 8)) resized_image1 = feature_extractor.resize(image, (8, 16)) self.assertTrue(isinstance(resized_image1, PIL.Image.Image)) self.assertEqual(resized_image1.size, (8, 16)) resized_image2 = feature_extractor.resize(array, 8) self.assertTrue(isinstance(resized_image2, PIL.Image.Image)) self.assertEqual(resized_image2.size, (8, 8)) self.assertTrue(np.array_equal(np.array(resized_image), np.array(resized_image2))) resized_image3 = feature_extractor.resize(image, (8, 16)) self.assertTrue(isinstance(resized_image3, PIL.Image.Image)) self.assertEqual(resized_image3.size, (8, 16)) self.assertTrue(np.array_equal(np.array(resized_image1), np.array(resized_image3))) def test_resize_image_and_array_non_default_to_square(self): feature_extractor = ImageFeatureExtractionMixin() heights_widths = [ (28, 28), (27, 27), (28, 34), (29, 35), (34, 28), (35, 29), ] sizes = [22, 27, 28, 36, [22], (27,)] for (height, width), size in zip(heights_widths, sizes): for max_size in (None, 37, 1000): image = get_random_image(height, width) array = np.array(image) size = size[0] if isinstance(size, (list, tuple)) else size if height < width: exp_w, exp_h = (int(size * width / height), size) if max_size is not None and max_size < exp_w: exp_w, exp_h = max_size, int(max_size * exp_h / exp_w) elif width < height: exp_w, exp_h = (size, int(size * height / width)) if max_size is not None and max_size < exp_h: exp_w, exp_h = int(max_size * exp_w / exp_h), max_size else: exp_w, exp_h = (size, size) if max_size is not None and max_size < size: exp_w, exp_h = max_size, max_size resized_image = feature_extractor.resize(image, size=size, default_to_square=False, max_size=max_size) self.assertTrue(isinstance(resized_image, PIL.Image.Image)) 
self.assertEqual(resized_image.size, (exp_w, exp_h)) resized_image2 = feature_extractor.resize(array, size=size, default_to_square=False, max_size=max_size) self.assertTrue(isinstance(resized_image2, PIL.Image.Image)) self.assertEqual(resized_image2.size, (exp_w, exp_h)) self.assertTrue(np.array_equal(np.array(resized_image), np.array(resized_image2))) @require_torch def test_resize_tensor(self): feature_extractor = ImageFeatureExtractionMixin() tensor = torch.randint(0, 256, (16, 32, 3)) array = tensor.numpy() resized_image = feature_extractor.resize(tensor, 8) self.assertTrue(isinstance(resized_image, PIL.Image.Image)) self.assertEqual(resized_image.size, (8, 8)) resized_image1 = feature_extractor.resize(tensor, (8, 16)) self.assertTrue(isinstance(resized_image1, PIL.Image.Image)) self.assertEqual(resized_image1.size, (8, 16)) resized_image2 = feature_extractor.resize(array, 8) self.assertTrue(np.array_equal(np.array(resized_image), np.array(resized_image2))) resized_image3 = feature_extractor.resize(array, (8, 16)) self.assertTrue(np.array_equal(np.array(resized_image1), np.array(resized_image3))) def test_normalize_image(self): feature_extractor = ImageFeatureExtractionMixin() image = get_random_image(16, 32) array = np.array(image) mean = [0.1, 0.5, 0.9] std = [0.2, 0.4, 0.6] normalized_image = feature_extractor.normalize(image, mean, std) self.assertTrue(isinstance(normalized_image, np.ndarray)) self.assertEqual(normalized_image.shape, (3, 16, 32)) expected = array.transpose(2, 0, 1).astype(np.float32) * (1 / 255.0) np_mean = np.array(mean).astype(np.float32)[:, None, None] np_std = np.array(std).astype(np.float32)[:, None, None] expected = (expected - np_mean) / np_std self.assertTrue(np.array_equal(normalized_image, expected)) def test_normalize_array(self): feature_extractor = ImageFeatureExtractionMixin() array = np.random.random((16, 32, 3)) mean = [0.1, 0.5, 0.9] std = [0.2, 0.4, 0.6] expected = (array - np.array(mean)) / np.array(std) normalized_array = feature_extractor.normalize(array, mean, std) self.assertTrue(np.array_equal(normalized_array, expected)) normalized_array = feature_extractor.normalize(array, np.array(mean), np.array(std)) self.assertTrue(np.array_equal(normalized_array, expected)) array = np.random.random((3, 16, 32)) expected = (array - np.array(mean)[:, None, None]) / np.array(std)[:, None, None] normalized_array = feature_extractor.normalize(array, mean, std) self.assertTrue(np.array_equal(normalized_array, expected)) normalized_array = feature_extractor.normalize(array, np.array(mean), np.array(std)) self.assertTrue(np.array_equal(normalized_array, expected)) @require_torch def test_normalize_tensor(self): feature_extractor = ImageFeatureExtractionMixin() tensor = torch.rand(16, 32, 3) mean = [0.1, 0.5, 0.9] std = [0.2, 0.4, 0.6] expected = (tensor - torch.tensor(mean)) / torch.tensor(std) normalized_tensor = feature_extractor.normalize(tensor, mean, std) self.assertTrue(torch.equal(normalized_tensor, expected)) normalized_tensor = feature_extractor.normalize(tensor, torch.tensor(mean), torch.tensor(std)) self.assertTrue(torch.equal(normalized_tensor, expected)) tensor = torch.rand(3, 16, 32) expected = (tensor - torch.tensor(mean)[:, None, None]) / torch.tensor(std)[:, None, None] normalized_tensor = feature_extractor.normalize(tensor, mean, std) self.assertTrue(torch.equal(normalized_tensor, expected)) normalized_tensor = feature_extractor.normalize(tensor, torch.tensor(mean), torch.tensor(std)) self.assertTrue(torch.equal(normalized_tensor, expected)) 
def test_center_crop_image(self): feature_extractor = ImageFeatureExtractionMixin() image = get_random_image(16, 32) crop_sizes = [8, (8, 64), 20, (32, 64)] for size in crop_sizes: cropped_image = feature_extractor.center_crop(image, size) self.assertTrue(isinstance(cropped_image, PIL.Image.Image)) expected_size = (size, size) if isinstance(size, int) else (size[1], size[0]) self.assertEqual(cropped_image.size, expected_size) def test_center_crop_array(self): feature_extractor = ImageFeatureExtractionMixin() image = get_random_image(16, 32) array = feature_extractor.to_numpy_array(image) crop_sizes = [8, (8, 64), 20, (32, 64)] for size in crop_sizes: cropped_array = feature_extractor.center_crop(array, size) self.assertTrue(isinstance(cropped_array, np.ndarray)) expected_size = (size, size) if isinstance(size, int) else size self.assertEqual(cropped_array.shape[-2:], expected_size) cropped_image = feature_extractor.center_crop(image, size) self.assertTrue(np.array_equal(cropped_array, feature_extractor.to_numpy_array(cropped_image))) @require_torch def test_center_crop_tensor(self): feature_extractor = ImageFeatureExtractionMixin() image = get_random_image(16, 32) array = feature_extractor.to_numpy_array(image) tensor = torch.tensor(array) crop_sizes = [8, (8, 64), 20, (32, 64)] for size in crop_sizes: cropped_tensor = feature_extractor.center_crop(tensor, size) self.assertTrue(isinstance(cropped_tensor, torch.Tensor)) expected_size = (size, size) if isinstance(size, int) else size self.assertEqual(cropped_tensor.shape[-2:], expected_size) cropped_image = feature_extractor.center_crop(image, size) self.assertTrue(torch.equal(cropped_tensor, torch.tensor(feature_extractor.to_numpy_array(cropped_image)))) @require_vision class LoadImageTester(unittest.TestCase): def test_load_img_url(self): img = load_image(INVOICE_URL) img_arr = np.array(img) self.assertEqual(img_arr.shape, (1061, 750, 3)) @is_flaky() def test_load_img_url_timeout(self): with self.assertRaises((ReadTimeout, ConnectTimeout)): load_image(INVOICE_URL, timeout=0.001) def test_load_img_local(self): img = load_image("./tests/fixtures/tests_samples/COCO/000000039769.png") img_arr = np.array(img) self.assertEqual( img_arr.shape, (480, 640, 3), ) def test_load_img_base64_prefix(self): try: tmp_file = tempfile.mktemp() with open(tmp_file, "wb") as f: http_get( "https://huggingface.co/datasets/hf-internal-testing/dummy-base64-images/raw/main/image_0.txt", f ) with open(tmp_file, encoding="utf-8") as b64: img = load_image(b64.read()) img_arr = np.array(img) finally: os.remove(tmp_file) self.assertEqual(img_arr.shape, (64, 32, 3)) def test_load_img_base64(self): try: tmp_file = tempfile.mktemp() with open(tmp_file, "wb") as f: http_get( "https://huggingface.co/datasets/hf-internal-testing/dummy-base64-images/raw/main/image_1.txt", f ) with open(tmp_file, encoding="utf-8") as b64: img = load_image(b64.read()) img_arr = np.array(img) finally: os.remove(tmp_file) self.assertEqual(img_arr.shape, (64, 32, 3)) def test_load_img_rgba(self): dataset = datasets.load_dataset("hf-internal-testing/fixtures_image_utils", "image", split="test") img = load_image(dataset[0]["file"]) img_arr = np.array(img) self.assertEqual( img_arr.shape, (512, 512, 3), ) def test_load_img_la(self): dataset = datasets.load_dataset("hf-internal-testing/fixtures_image_utils", "image", split="test") img = load_image(dataset[1]["file"]) img_arr = np.array(img) self.assertEqual( img_arr.shape, (512, 768, 3), ) def test_load_img_l(self): dataset = 
datasets.load_dataset("hf-internal-testing/fixtures_image_utils", "image", split="test") img = load_image(dataset[2]["file"]) img_arr = np.array(img) self.assertEqual( img_arr.shape, (381, 225, 3), ) def test_load_img_exif_transpose(self): dataset = datasets.load_dataset("hf-internal-testing/fixtures_image_utils", "image", split="test") img_file = dataset[3]["file"] img_without_exif_transpose = PIL.Image.open(img_file) img_arr_without_exif_transpose = np.array(img_without_exif_transpose) self.assertEqual( img_arr_without_exif_transpose.shape, (333, 500, 3), ) img_with_exif_transpose = load_image(img_file) img_arr_with_exif_transpose = np.array(img_with_exif_transpose) self.assertEqual( img_arr_with_exif_transpose.shape, (500, 333, 3), ) class UtilFunctionTester(unittest.TestCase): def test_get_image_size(self): image = np.random.randint(0, 256, (32, 64, 3)) self.assertEqual(get_image_size(image), (32, 64)) image = np.random.randint(0, 256, (3, 32, 64)) self.assertEqual(get_image_size(image), (32, 64)) image = np.random.randint(0, 256, (3, 32, 64)) self.assertEqual(get_image_size(image, channel_dim=ChannelDimension.LAST), (3, 32)) def test_infer_channel_dimension(self): with pytest.raises(ValueError): infer_channel_dimension_format(np.random.randint(0, 256, (10, 10))) with pytest.raises(ValueError): infer_channel_dimension_format(np.random.randint(0, 256, (10, 10, 10, 10, 10))) with pytest.raises(ValueError): infer_channel_dimension_format(np.random.randint(0, 256, (10, 1, 50))) inferred_dim = infer_channel_dimension_format(np.random.randint(0, 256, (10, 1, 50)), num_channels=50) self.assertEqual(inferred_dim, ChannelDimension.LAST) image = np.random.randint(0, 256, (3, 4, 5)) inferred_dim = infer_channel_dimension_format(image) self.assertEqual(inferred_dim, ChannelDimension.FIRST) image = np.random.randint(0, 256, (1, 4, 5)) inferred_dim = infer_channel_dimension_format(image) self.assertEqual(inferred_dim, ChannelDimension.FIRST) image = np.random.randint(0, 256, (4, 5, 3)) inferred_dim = infer_channel_dimension_format(image) self.assertEqual(inferred_dim, ChannelDimension.LAST) image = np.random.randint(0, 256, (4, 5, 1)) inferred_dim = infer_channel_dimension_format(image) self.assertEqual(inferred_dim, ChannelDimension.LAST) image = np.random.randint(0, 256, (1, 3, 4, 5)) inferred_dim = infer_channel_dimension_format(image) self.assertEqual(inferred_dim, ChannelDimension.FIRST) def test_get_channel_dimension_axis(self): image = np.random.randint(0, 256, (3, 4, 5)) inferred_axis = get_channel_dimension_axis(image) self.assertEqual(inferred_axis, 0) image = np.random.randint(0, 256, (1, 4, 5)) inferred_axis = get_channel_dimension_axis(image) self.assertEqual(inferred_axis, 0) image = np.random.randint(0, 256, (4, 5, 3)) inferred_axis = get_channel_dimension_axis(image) self.assertEqual(inferred_axis, 2) image = np.random.randint(0, 256, (4, 5, 1)) inferred_axis = get_channel_dimension_axis(image) self.assertEqual(inferred_axis, 2) image = np.random.randint(0, 256, (1, 3, 4, 5)) inferred_axis = get_channel_dimension_axis(image) self.assertEqual(inferred_axis, 1)
Copyright 2020 The HuggingFace Team. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0. Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
The current default level is logging.WARNING; restore to the original level afterwards. Should be able to log warnings (if default settings weren't overridden by pytest --log-level all). This is setting the level for all of transformers' loggers; should not be able to log warnings; should be able to log warnings again; restore to the original level. Reset for the env var to take effect; the next time some logger call is made, this action activates the env var; restore to the original level. Reset for the env var to take effect; the next time some logger call is made, this action activates the env var; no need to restore, as nothing was changed. Testing logger.warning_advice: nothing should be logged, as the env var disables this method; should log normally, as TRANSFORMERS_NO_ADVISORY_WARNINGS is unset.
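As a quick illustration of the verbosity round-trip these notes describe (a sketch, not part of the test file; it uses only the helpers exercised below):

from transformers import logging

level_origin = logging.get_verbosity()   # default is logging.WARNING
logging.set_verbosity_error()            # transformers loggers stop emitting warnings
assert logging.get_verbosity() > logging.WARNING
logging.set_verbosity(level_origin)      # always restore the original level, as the tests do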
import os import unittest from huggingface_hub.utils import are_progress_bars_disabled import transformers.models.bart.tokenization_bart from transformers import logging from transformers.testing_utils import CaptureLogger, mockenv, mockenv_context from transformers.utils.logging import disable_progress_bar, enable_progress_bar class HfArgumentParserTest(unittest.TestCase): def test_set_level(self): logger = logging.get_logger() level_origin = logging.get_verbosity() logging.set_verbosity_error() self.assertEqual(logger.getEffectiveLevel(), logging.get_verbosity()) logging.set_verbosity_warning() self.assertEqual(logger.getEffectiveLevel(), logging.get_verbosity()) logging.set_verbosity_info() self.assertEqual(logger.getEffectiveLevel(), logging.get_verbosity()) logging.set_verbosity_debug() self.assertEqual(logger.getEffectiveLevel(), logging.get_verbosity()) logging.set_verbosity(level_origin) def test_integration(self): level_origin = logging.get_verbosity() logger = logging.get_logger("transformers.models.bart.tokenization_bart") msg = "Testing 1, 2, 3" if level_origin <= logging.WARNING: with CaptureLogger(logger) as cl: logger.warning(msg) self.assertEqual(cl.out, msg + "\n") logging.set_verbosity_error() with CaptureLogger(logger) as cl: logger.warning(msg) self.assertEqual(cl.out, "") logging.set_verbosity_warning() with CaptureLogger(logger) as cl: logger.warning(msg) self.assertEqual(cl.out, msg + "\n") logging.set_verbosity(level_origin) @mockenv(TRANSFORMERS_VERBOSITY="error") def test_env_override(self): transformers.utils.logging._reset_library_root_logger() _ = logging.get_logger("transformers.models.bart.tokenization_bart") env_level_str = os.getenv("TRANSFORMERS_VERBOSITY", None) env_level = logging.log_levels[env_level_str] current_level = logging.get_verbosity() self.assertEqual( env_level, current_level, f"TRANSFORMERS_VERBOSITY={env_level_str}/{env_level}, but internal verbosity is {current_level}", ) os.environ["TRANSFORMERS_VERBOSITY"] = "" transformers.utils.logging._reset_library_root_logger() @mockenv(TRANSFORMERS_VERBOSITY="super-error") def test_env_invalid_override(self): transformers.utils.logging._reset_library_root_logger() logger = logging.logging.getLogger() with CaptureLogger(logger) as cl: logging.get_logger("transformers.models.bart.tokenization_bart") self.assertIn("Unknown option TRANSFORMERS_VERBOSITY=super-error", cl.out) def test_advisory_warnings(self): transformers.utils.logging._reset_library_root_logger() logger = logging.get_logger("transformers.models.bart.tokenization_bart") msg = "Testing 1, 2, 3" with mockenv_context(TRANSFORMERS_NO_ADVISORY_WARNINGS="1"): with CaptureLogger(logger) as cl: logger.warning_advice(msg) self.assertEqual(cl.out, "") with mockenv_context(TRANSFORMERS_NO_ADVISORY_WARNINGS=""): with CaptureLogger(logger) as cl: logger.warning_advice(msg) self.assertEqual(cl.out, msg + "\n") def test_set_progress_bar_enabled(): disable_progress_bar() assert are_progress_bars_disabled() enable_progress_bar() assert not are_progress_bars_disabled()
coding: utf-8. Copyright 2019 HuggingFace Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0. Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
import json import os import tempfile import unittest from transformers.modelcard import ModelCard class ModelCardTester(unittest.TestCase): def setUp(self): self.inputs_dict = { "model_details": { "Organization": "testing", "Model date": "today", "Model version": "v2.1, Developed by Test Corp in 2019.", "Architecture": "Convolutional Neural Network.", }, "metrics": "BLEU and ROUGE-1", "evaluation_data": { "Datasets": {"BLEU": "My-great-dataset-v1", "ROUGE-1": "My-short-dataset-v2.1"}, "Preprocessing": "See details on https://arxiv.org/pdf/1810.03993.pdf", }, "training_data": { "Dataset": "English Wikipedia dump dated 2018-12-01", "Preprocessing": ( "Using SentencePiece vocabulary of size 52k tokens. See details on" " https://arxiv.org/pdf/1810.03993.pdf" ), }, "quantitative_analyses": {"BLEU": 55.1, "ROUGE-1": 76}, } def test_model_card_common_properties(self): modelcard = ModelCard.from_dict(self.inputs_dict) self.assertTrue(hasattr(modelcard, "model_details")) self.assertTrue(hasattr(modelcard, "intended_use")) self.assertTrue(hasattr(modelcard, "factors")) self.assertTrue(hasattr(modelcard, "metrics")) self.assertTrue(hasattr(modelcard, "evaluation_data")) self.assertTrue(hasattr(modelcard, "training_data")) self.assertTrue(hasattr(modelcard, "quantitative_analyses")) self.assertTrue(hasattr(modelcard, "ethical_considerations")) self.assertTrue(hasattr(modelcard, "caveats_and_recommendations")) def test_model_card_to_json_string(self): modelcard = ModelCard.from_dict(self.inputs_dict) obj = json.loads(modelcard.to_json_string()) for key, value in self.inputs_dict.items(): self.assertEqual(obj[key], value) def test_model_card_to_json_file(self): model_card_first = ModelCard.from_dict(self.inputs_dict) with tempfile.TemporaryDirectory() as tmpdirname: filename = os.path.join(tmpdirname, "modelcard.json") model_card_first.to_json_file(filename) model_card_second = ModelCard.from_json_file(filename) self.assertEqual(model_card_second.to_dict(), model_card_first.to_dict()) def test_model_card_from_and_save_pretrained(self): model_card_first = ModelCard.from_dict(self.inputs_dict) with tempfile.TemporaryDirectory() as tmpdirname: model_card_first.save_pretrained(tmpdirname) model_card_second = ModelCard.from_pretrained(tmpdirname) self.assertEqual(model_card_second.to_dict(), model_card_first.to_dict())
coding: utf-8. Copyright 2020 The Hugging Face Team. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0. Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
Ensure torch.utils._pytree treats ModelOutput subclasses as nodes (and not leaves); this is important for DistributedDataParallel gradient synchronization with static_graph=True. Invalid test subclass of ModelOutput where the @dataclass decorator is not used. Check that direct usage of ModelOutput instantiates without errors. Check that a subclass of ModelOutput without @dataclass is invalid; a valid subclass is inherently tested by the other unit tests above.
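A minimal sketch of the valid ModelOutput pattern these notes refer to (ExampleOutput is a hypothetical name used only for illustration; the behaviour shown matches the assertions in the tests below):

from dataclasses import dataclass
from typing import Optional

from transformers.utils import ModelOutput


@dataclass
class ExampleOutput(ModelOutput):
    # The @dataclass decorator is required; omitting it makes instantiation
    # raise (see test_subclass_no_dataclass below).
    a: float = None
    b: Optional[float] = None


out = ExampleOutput(a=1.0)
assert out.a == out["a"] == 1.0        # attribute and dict-style access agree
assert list(out.keys()) == ["a"]       # fields left as None are dropped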
import unittest from dataclasses import dataclass from typing import Optional from transformers.testing_utils import require_torch from transformers.utils import ModelOutput @dataclass class ModelOutputTest(ModelOutput): a: float b: Optional[float] = None c: Optional[float] = None class ModelOutputTester(unittest.TestCase): def test_get_attributes(self): x = ModelOutputTest(a=30) self.assertEqual(x.a, 30) self.assertIsNone(x.b) self.assertIsNone(x.c) with self.assertRaises(AttributeError): _ = x.d def test_index_with_ints_and_slices(self): x = ModelOutputTest(a=30, b=10) self.assertEqual(x[0], 30) self.assertEqual(x[1], 10) self.assertEqual(x[:2], (30, 10)) self.assertEqual(x[:], (30, 10)) x = ModelOutputTest(a=30, c=10) self.assertEqual(x[0], 30) self.assertEqual(x[1], 10) self.assertEqual(x[:2], (30, 10)) self.assertEqual(x[:], (30, 10)) def test_index_with_strings(self): x = ModelOutputTest(a=30, b=10) self.assertEqual(x["a"], 30) self.assertEqual(x["b"], 10) with self.assertRaises(KeyError): _ = x["c"] x = ModelOutputTest(a=30, c=10) self.assertEqual(x["a"], 30) self.assertEqual(x["c"], 10) with self.assertRaises(KeyError): _ = x["b"] def test_dict_like_properties(self): x = ModelOutputTest(a=30) self.assertEqual(list(x.keys()), ["a"]) self.assertEqual(list(x.values()), [30]) self.assertEqual(list(x.items()), [("a", 30)]) self.assertEqual(list(x), ["a"]) x = ModelOutputTest(a=30, b=10) self.assertEqual(list(x.keys()), ["a", "b"]) self.assertEqual(list(x.values()), [30, 10]) self.assertEqual(list(x.items()), [("a", 30), ("b", 10)]) self.assertEqual(list(x), ["a", "b"]) x = ModelOutputTest(a=30, c=10) self.assertEqual(list(x.keys()), ["a", "c"]) self.assertEqual(list(x.values()), [30, 10]) self.assertEqual(list(x.items()), [("a", 30), ("c", 10)]) self.assertEqual(list(x), ["a", "c"]) with self.assertRaises(Exception): x = x.update({"d": 20}) with self.assertRaises(Exception): del x["a"] with self.assertRaises(Exception): _ = x.pop("a") with self.assertRaises(Exception): _ = x.setdefault("d", 32) def test_set_attributes(self): x = ModelOutputTest(a=30) x.a = 10 self.assertEqual(x.a, 10) self.assertEqual(x["a"], 10) def test_set_keys(self): x = ModelOutputTest(a=30) x["a"] = 10 self.assertEqual(x.a, 10) self.assertEqual(x["a"], 10) def test_instantiate_from_dict(self): x = ModelOutputTest({"a": 30, "b": 10}) self.assertEqual(list(x.keys()), ["a", "b"]) self.assertEqual(x.a, 30) self.assertEqual(x.b, 10) def test_instantiate_from_iterator(self): x = ModelOutputTest([("a", 30), ("b", 10)]) self.assertEqual(list(x.keys()), ["a", "b"]) self.assertEqual(x.a, 30) self.assertEqual(x.b, 10) with self.assertRaises(ValueError): _ = ModelOutputTest([("a", 30), (10, 10)]) x = ModelOutputTest(a=(30, 30)) self.assertEqual(list(x.keys()), ["a"]) self.assertEqual(x.a, (30, 30)) @require_torch def test_torch_pytree(self): import torch.utils._pytree as pytree x = ModelOutput({"a": 1.0, "c": 2.0}) self.assertFalse(pytree._is_leaf(x)) x = ModelOutputTest(a=1.0, c=2.0) self.assertFalse(pytree._is_leaf(x)) expected_flat_outs = [1.0, 2.0] expected_tree_spec = pytree.TreeSpec( ModelOutputTest, (ModelOutputTest, ["a", "c"]), [pytree.LeafSpec(), pytree.LeafSpec()] ) actual_flat_outs, actual_tree_spec = pytree.tree_flatten(x) self.assertEqual(expected_flat_outs, actual_flat_outs) self.assertEqual(expected_tree_spec, actual_tree_spec) unflattened_x = pytree.tree_unflatten(actual_flat_outs, actual_tree_spec) self.assertEqual(x, unflattened_x) class ModelOutputTestNoDataclass(ModelOutput): a: float b: Optional[float] = 
None c: Optional[float] = None class ModelOutputSubclassTester(unittest.TestCase): def test_direct_model_output(self): ModelOutput({"a": 1.1}) def test_subclass_no_dataclass(self): with self.assertRaises(TypeError): ModelOutputTestNoDataclass(a=1.1, b=2.2, c=3.3)
coding: utf-8. Copyright 2019 HuggingFace Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0. Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
Restrict TensorFlow to only allocate x GB of memory on the GPUs; virtual devices must be set before GPUs have been initialized. This is a copy of the test_keras_fit method, but we use XLA compilation instead of eager. Test that the model correctly computes the loss with kwargs. Is there a better way to remove these decoder inputs? Make sure it works with XLA. Make sure the model fits without crashing regardless of where we pass the labels; now test it with separate labels, to make sure that path works in XLA too. Remove keys not in the serving signature, as the SavedModel will not be compiled to deal with them; check it's a tensor, in case the inputs dict has some bools in it too. Some models have inputs that the preparation functions don't create; we skip those. try/finally block to ensure subsequent tests run in float32. head_mask and decoder_head_mask have different shapes than other input args. T5MainLayer needs an embed_tokens parameter when called without the inputs_embeds parameter; take the same values as in TFT5ModelTester for this shared layer. Special tokens cannot be bad tokens; create random bad tokens that are not special tokens. For all bad-word tokens, for all slices in the batch, for all word indices: if tokens match.
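A small, self-contained sketch of the XLA fit path mentioned above, using a toy Keras model rather than one of the transformers classes under test (it assumes a TensorFlow version whose Model.compile accepts jit_compile, as the tests below do):

from math import isnan

import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(2)])
model.compile(optimizer=tf.keras.optimizers.SGD(0.0), loss="mse", jit_compile=True)  # fit() now runs under XLA

x = tf.random.uniform((4, 8))
y = tf.random.uniform((4, 2))
history = model.fit(x, y, epochs=1, verbose=0)
assert not isnan(history.history["loss"][0])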
from __future__ import annotations import copy import os import tempfile from importlib import import_module from math import isnan from transformers import is_tf_available from transformers.models.auto import get_values from transformers.testing_utils import _tf_gpu_memory_limit, require_tf, slow from ..test_modeling_tf_common import ids_tensor if is_tf_available(): import numpy as np import tensorflow as tf from transformers import ( TF_MODEL_FOR_CAUSAL_LM_MAPPING, TF_MODEL_FOR_IMAGE_CLASSIFICATION_MAPPING, TF_MODEL_FOR_MASKED_LM_MAPPING, TF_MODEL_FOR_MULTIPLE_CHOICE_MAPPING, TF_MODEL_FOR_NEXT_SENTENCE_PREDICTION_MAPPING, TF_MODEL_FOR_PRETRAINING_MAPPING, TF_MODEL_FOR_QUESTION_ANSWERING_MAPPING, TF_MODEL_FOR_SEQ_TO_SEQ_CAUSAL_LM_MAPPING, TF_MODEL_FOR_SEQUENCE_CLASSIFICATION_MAPPING, TF_MODEL_FOR_TOKEN_CLASSIFICATION_MAPPING, TFSharedEmbeddings, ) if _tf_gpu_memory_limit is not None: gpus = tf.config.list_physical_devices("GPU") for gpu in gpus: try: tf.config.set_logical_device_configuration( gpu, [tf.config.LogicalDeviceConfiguration(memory_limit=_tf_gpu_memory_limit)] ) logical_gpus = tf.config.list_logical_devices("GPU") print("Logical GPUs", logical_gpus) except RuntimeError as e: print(e) @require_tf class TFCoreModelTesterMixin: model_tester = None all_model_classes = () all_generative_model_classes = () test_mismatched_shapes = True test_resize_embeddings = True test_head_masking = True is_encoder_decoder = False def _prepare_for_class(self, inputs_dict, model_class, return_labels=False) -> dict: inputs_dict = copy.deepcopy(inputs_dict) if model_class in get_values(TF_MODEL_FOR_MULTIPLE_CHOICE_MAPPING): inputs_dict = { k: tf.tile(tf.expand_dims(v, 1), (1, self.model_tester.num_choices) + (1,) * (v.ndim - 1)) if isinstance(v, tf.Tensor) and v.ndim > 0 else v for k, v in inputs_dict.items() } if return_labels: if model_class in get_values(TF_MODEL_FOR_MULTIPLE_CHOICE_MAPPING): inputs_dict["labels"] = tf.ones(self.model_tester.batch_size, dtype=tf.int32) elif model_class in get_values(TF_MODEL_FOR_QUESTION_ANSWERING_MAPPING): inputs_dict["start_positions"] = tf.zeros(self.model_tester.batch_size, dtype=tf.int32) inputs_dict["end_positions"] = tf.zeros(self.model_tester.batch_size, dtype=tf.int32) elif model_class in [ *get_values(TF_MODEL_FOR_SEQUENCE_CLASSIFICATION_MAPPING), *get_values(TF_MODEL_FOR_IMAGE_CLASSIFICATION_MAPPING), ]: inputs_dict["labels"] = tf.zeros(self.model_tester.batch_size, dtype=tf.int32) elif model_class in get_values(TF_MODEL_FOR_NEXT_SENTENCE_PREDICTION_MAPPING): inputs_dict["next_sentence_label"] = tf.zeros(self.model_tester.batch_size, dtype=tf.int32) elif model_class in [ *get_values(TF_MODEL_FOR_TOKEN_CLASSIFICATION_MAPPING), *get_values(TF_MODEL_FOR_CAUSAL_LM_MAPPING), *get_values(TF_MODEL_FOR_MASKED_LM_MAPPING), *get_values(TF_MODEL_FOR_PRETRAINING_MAPPING), *get_values(TF_MODEL_FOR_SEQ_TO_SEQ_CAUSAL_LM_MAPPING), ]: inputs_dict["labels"] = tf.zeros( (self.model_tester.batch_size, self.model_tester.seq_length), dtype=tf.int32 ) return inputs_dict @slow def test_graph_mode(self): config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() for model_class in self.all_model_classes[:2]: inputs = self._prepare_for_class(inputs_dict, model_class) model = model_class(config) @tf.function def run_in_graph_mode(): return model(inputs) outputs = run_in_graph_mode() self.assertIsNotNone(outputs) @slow def test_xla_mode(self): config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() for model_class in 
self.all_model_classes[:2]: inputs = self._prepare_for_class(inputs_dict, model_class) model = model_class(config) @tf.function(experimental_compile=True) def run_in_graph_mode(): return model(inputs) outputs = run_in_graph_mode() self.assertIsNotNone(outputs) @slow def test_xla_fit(self): config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() for model_class in self.all_model_classes[:2]: model = model_class(config) if getattr(model, "hf_compute_loss", None): prepared_for_class = self._prepare_for_class(inputs_dict.copy(), model_class, return_labels=True) prepared_for_class = { key: val for key, val in prepared_for_class.items() if key not in ("head_mask", "decoder_head_mask", "cross_attn_head_mask", "decoder_input_ids") } possible_label_cols = { "labels", "label", "label_ids", "start_positions", "start_position", "end_positions", "end_position", "next_sentence_label", } label_names = possible_label_cols.intersection(set(prepared_for_class)) self.assertGreater(len(label_names), 0, msg="No matching label names found!") labels = {key: val for key, val in prepared_for_class.items() if key in label_names} inputs_minus_labels = {key: val for key, val in prepared_for_class.items() if key not in label_names} self.assertGreater(len(inputs_minus_labels), 0) model.compile(optimizer=tf.keras.optimizers.SGD(0.0), jit_compile=True) history = model.fit( prepared_for_class, validation_data=prepared_for_class, steps_per_epoch=1, validation_steps=1, shuffle=False, verbose=0, ) loss = history.history["loss"][0] self.assertTrue(not isnan(loss)) val_loss = history.history["val_loss"][0] self.assertTrue(not isnan(val_loss)) model = model_class(config) model.compile(optimizer=tf.keras.optimizers.SGD(0.0), jit_compile=True) history = model.fit( inputs_minus_labels, labels, validation_data=(inputs_minus_labels, labels), steps_per_epoch=1, validation_steps=1, shuffle=False, verbose=0, ) loss = history.history["loss"][0] self.assertTrue(not isnan(loss)) val_loss = history.history["val_loss"][0] self.assertTrue(not isnan(val_loss)) @slow def test_saved_model_creation_extended(self): config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() config.output_hidden_states = True config.output_attentions = True if hasattr(config, "use_cache"): config.use_cache = True encoder_seq_length = getattr(self.model_tester, "encoder_seq_length", self.model_tester.seq_length) encoder_key_length = getattr(self.model_tester, "key_length", encoder_seq_length) for model_class in self.all_model_classes[:2]: class_inputs_dict = self._prepare_for_class(inputs_dict, model_class) model = model_class(config) model.build() num_out = len(model(class_inputs_dict)) for key in list(class_inputs_dict.keys()): if key not in model.input_signature: del class_inputs_dict[key] elif isinstance(class_inputs_dict[key], tf.Tensor) and class_inputs_dict[key].dtype.is_integer: class_inputs_dict[key] = tf.cast(class_inputs_dict[key], tf.int32) if set(class_inputs_dict.keys()) != set(model.input_signature.keys()): continue with tempfile.TemporaryDirectory() as tmpdirname: model.save_pretrained(tmpdirname, saved_model=True) saved_model_dir = os.path.join(tmpdirname, "saved_model", "1") model = tf.keras.models.load_model(saved_model_dir) outputs = model(class_inputs_dict) if self.is_encoder_decoder: output_hidden_states = outputs["encoder_hidden_states"] output_attentions = outputs["encoder_attentions"] else: output_hidden_states = outputs["hidden_states"] output_attentions = outputs["attentions"] 
self.assertEqual(len(outputs), num_out) expected_num_layers = getattr( self.model_tester, "expected_num_hidden_layers", self.model_tester.num_hidden_layers + 1 ) self.assertEqual(len(output_hidden_states), expected_num_layers) self.assertListEqual( list(output_hidden_states[0].shape[-2:]), [self.model_tester.seq_length, self.model_tester.hidden_size], ) self.assertEqual(len(output_attentions), self.model_tester.num_hidden_layers) self.assertListEqual( list(output_attentions[0].shape[-3:]), [self.model_tester.num_attention_heads, encoder_seq_length, encoder_key_length], ) @slow def test_mixed_precision(self): tf.keras.mixed_precision.set_global_policy("mixed_float16") try: config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() for model_class in self.all_model_classes[:2]: class_inputs_dict = self._prepare_for_class(inputs_dict, model_class) model = model_class(config) outputs = model(class_inputs_dict) self.assertIsNotNone(outputs) finally: tf.keras.mixed_precision.set_global_policy("float32") @slow def test_train_pipeline_custom_model(self): config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() if "head_mask" in inputs_dict: del inputs_dict["head_mask"] if "decoder_head_mask" in inputs_dict: del inputs_dict["decoder_head_mask"] if "cross_attn_head_mask" in inputs_dict: del inputs_dict["cross_attn_head_mask"] tf_main_layer_classes = { module_member for model_class in self.all_model_classes for module in (import_module(model_class.__module__),) for module_member_name in dir(module) if module_member_name.endswith("MainLayer") for module_member in (getattr(module, module_member_name),) if isinstance(module_member, type) and tf.keras.layers.Layer in module_member.__bases__ and getattr(module_member, "_keras_serializable", False) } for main_layer_class in tf_main_layer_classes: if "T5" in main_layer_class.__name__: shared = TFSharedEmbeddings(self.model_tester.vocab_size, self.model_tester.hidden_size, name="shared") config.use_cache = False main_layer = main_layer_class(config, embed_tokens=shared) else: main_layer = main_layer_class(config) symbolic_inputs = { name: tf.keras.Input(tensor.shape[1:], dtype=tensor.dtype) for name, tensor in inputs_dict.items() } if hasattr(self.model_tester, "num_labels"): num_labels = self.model_tester.num_labels else: num_labels = 2 X = tf.data.Dataset.from_tensor_slices( (inputs_dict, np.ones((self.model_tester.batch_size, self.model_tester.seq_length, num_labels, 1))) ).batch(1) hidden_states = main_layer(symbolic_inputs)[0] outputs = tf.keras.layers.Dense(num_labels, activation="softmax", name="outputs")(hidden_states) model = tf.keras.models.Model(inputs=symbolic_inputs, outputs=[outputs]) model.compile(loss="binary_crossentropy", optimizer="adam", metrics=["binary_accuracy"]) model.fit(X, epochs=1) with tempfile.TemporaryDirectory() as tmpdirname: filepath = os.path.join(tmpdirname, "keras_model.h5") model.save(filepath) if "T5" in main_layer_class.__name__: model = tf.keras.models.load_model( filepath, custom_objects={ main_layer_class.__name__: main_layer_class, "TFSharedEmbeddings": TFSharedEmbeddings, }, ) else: model = tf.keras.models.load_model( filepath, custom_objects={main_layer_class.__name__: main_layer_class} ) assert isinstance(model, tf.keras.Model) model(inputs_dict) @slow def test_graph_mode_with_inputs_embeds(self): config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() for model_class in self.all_model_classes[:2]: model = model_class(config) inputs = 
copy.deepcopy(inputs_dict) if not self.is_encoder_decoder: input_ids = inputs["input_ids"] del inputs["input_ids"] else: encoder_input_ids = inputs["input_ids"] decoder_input_ids = inputs.get("decoder_input_ids", encoder_input_ids) del inputs["input_ids"] inputs.pop("decoder_input_ids", None) if not self.is_encoder_decoder: inputs["inputs_embeds"] = model.get_input_embeddings()(input_ids) else: inputs["inputs_embeds"] = model.get_input_embeddings()(encoder_input_ids) inputs["decoder_inputs_embeds"] = model.get_input_embeddings()(decoder_input_ids) inputs = self._prepare_for_class(inputs, model_class) @tf.function def run_in_graph_mode(): return model(inputs) outputs = run_in_graph_mode() self.assertIsNotNone(outputs) def _generate_random_bad_tokens(self, num_bad_tokens, model): special_tokens = [] if model.config.bos_token_id is not None: special_tokens.append(model.config.bos_token_id) if model.config.pad_token_id is not None: special_tokens.append(model.config.pad_token_id) if model.config.eos_token_id is not None: special_tokens.append(model.config.eos_token_id) bad_tokens = [] while len(bad_tokens) < num_bad_tokens: token = tf.squeeze(ids_tensor((1, 1), self.model_tester.vocab_size), 0).numpy()[0] if token not in special_tokens: bad_tokens.append(token) return bad_tokens def _check_generated_ids(self, output_ids): for token_id in output_ids[0].numpy().tolist(): self.assertGreaterEqual(token_id, 0) self.assertLess(token_id, self.model_tester.vocab_size) def _check_match_tokens(self, generated_ids, bad_words_ids): for bad_word_ids in bad_words_ids: for generated_ids_slice in generated_ids: for i in range(len(bad_word_ids), len(generated_ids_slice)): if generated_ids_slice[i - len(bad_word_ids) : i] == bad_word_ids: return True return False
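A minimal sketch of the XLA-compilation pattern exercised by the tests above, using a hypothetical toy Keras model rather than a Transformers class; it only illustrates the `tf.function(jit_compile=True)` forward pass and the `model.compile(..., jit_compile=True)` fit under those assumptions.

import numpy as np
import tensorflow as tf

# Hypothetical toy model standing in for a Transformers TF model.
toy_model = tf.keras.Sequential([tf.keras.layers.Dense(4, activation="relu"), tf.keras.layers.Dense(2)])

# Graph-mode + XLA forward pass, mirroring the @tf.function(experimental_compile=True) usage above.
@tf.function(jit_compile=True)
def run_in_graph_mode(inputs):
    return toy_model(inputs)

x = np.random.rand(8, 16).astype("float32")
y = np.random.randint(0, 2, size=(8,))

_ = run_in_graph_mode(x)  # compiles and runs the forward pass with XLA

# XLA-compiled fit, mirroring model.compile(..., jit_compile=True) above.
toy_model.compile(
    optimizer=tf.keras.optimizers.SGD(0.0),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    jit_compile=True,
)
history = toy_model.fit(x, y, epochs=1, verbose=0)
assert not np.isnan(history.history["loss"][0])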
Copyright 2019-present The HuggingFace Inc. team; licensed under the Apache License, Version 2.0.

This test validates that we can stack skip decorators in groups and that they work correctly with other decorators. Since the decorators have already built their decision parameters (e.g. by checking the environment), we can't mock the environment and test each of the combinations. Ideally the following four invocations should all be run, but since different CI jobs run different configurations, all combinations get covered:

RUN_SLOW=1 pytest -ra tests/test_skip_decorators.py
RUN_SLOW=1 CUDA_VISIBLE_DEVICES="" pytest -ra tests/test_skip_decorators.py
RUN_SLOW=0 pytest -ra tests/test_skip_decorators.py
RUN_SLOW=0 CUDA_VISIBLE_DEVICES="" pytest -ra tests/test_skip_decorators.py

Skipping in unittest tests: we test that we can stack our skip decorators with third-party decorators, and that we can stack our skip decorators with each other. The combination of any skip decorator followed by `parameterized` fails to skip the tests: (1) `@slow` manages to correctly skip a `test_param_slow_first` defined as `@slow` / `@parameterized.expand(params)` / `def test_param_slow_first(self, param=None): check_slow()`, but (2) `parameterized` then creates new tests with a unique name for each parameter group; it has no idea that they are to be skipped, so they all run, ignoring `@slow`. Therefore skip decorators must come after `parameterized`. With `@parameterized.expand(params)` followed by `@slow` this works as expected: (1) `parameterized` creates new tests with unique names, and (2) each of them gets an opportunity to be skipped. Skipping in non-unittest (pytest-style) tests poses no problem at all.
import os import unittest import pytest from parameterized import parameterized from transformers.testing_utils import require_torch, require_torch_gpu, slow, torch_device params = [(1,)] def check_slow(): run_slow = bool(os.getenv("RUN_SLOW", 0)) if run_slow: assert True else: assert False, "should have been skipped" def check_slow_torch_cuda(): run_slow = bool(os.getenv("RUN_SLOW", 0)) if run_slow and torch_device == "cuda": assert True else: assert False, "should have been skipped" @require_torch class SkipTester(unittest.TestCase): @slow @require_torch_gpu def test_2_skips_slow_first(self): check_slow_torch_cuda() @require_torch_gpu @slow def test_2_skips_slow_last(self): check_slow_torch_cuda() @parameterized.expand(params) @slow def test_param_slow_last(self, param=None): check_slow() @slow @require_torch_gpu def test_pytest_2_skips_slow_first(): check_slow_torch_cuda() @require_torch_gpu @slow def test_pytest_2_skips_slow_last(): check_slow_torch_cuda() @slow @pytest.mark.parametrize("param", [1]) def test_pytest_param_slow_first(param): check_slow() @pytest.mark.parametrize("param", [1]) @slow def test_pytest_param_slow_last(param): check_slow()
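As a small illustration of the ordering rule described above, here is a hypothetical test class (not part of the suite) contrasting the broken and the correct decorator order under the assumptions stated in the comments.

import unittest
from parameterized import parameterized
from transformers.testing_utils import slow

params = [(1,)]

class OrderingExample(unittest.TestCase):  # hypothetical illustration only
    # Broken ordering (see the comment above): @slow correctly skips the placeholder method,
    # but parameterized later creates new uniquely-named tests that do not inherit the skip,
    # so they all run, ignoring @slow.
    # @slow
    # @parameterized.expand(params)
    # def test_param_slow_first(self, param=None):
    #     ...

    # Correct ordering: parameterized creates the tests first, then each generated test
    # gets an opportunity to be skipped by @slow.
    @parameterized.expand(params)
    @slow
    def test_param_slow_last(self, param=None):
        self.assertTrue(True)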
Copyright 2020 The HuggingFace team; licensed under the Apache License, Version 2.0.

Tests for version-requirement checking: the `<` operator with different version strings, then `<=`, `==`, `!=`, `>=`, `>` and operator mixes; a requirement without a version; unmet requirements due to a version conflict; unmet requirements due to a missing module; bogus requirement formats (both the whole requirement string and operator-only problems); and matching versus non-matching Python requirements.
import importlib.metadata
import sys

from transformers.testing_utils import TestCasePlus
from transformers.utils.versions import require_version, require_version_core


numpy_ver = importlib.metadata.version("numpy")
python_ver = ".".join([str(x) for x in sys.version_info[:3]])


class DependencyVersionCheckTest(TestCasePlus):
    def test_core(self):
        # lt + different version strings
        require_version_core("numpy<1000.4.5")
        require_version_core("numpy<1000.4")
        require_version_core("numpy<1000")

        # le
        require_version_core("numpy<=1000.4.5")
        require_version_core(f"numpy<={numpy_ver}")

        # eq
        require_version_core(f"numpy=={numpy_ver}")

        # ne
        require_version_core("numpy!=1000.4.5")

        # ge
        require_version_core("numpy>=1.0")
        require_version_core("numpy>=1.0.0")
        require_version_core(f"numpy>={numpy_ver}")

        # gt
        require_version_core("numpy>1.0.0")

        # mix
        require_version_core("numpy>1.0.0,<1000")

        # requirement w/o version
        require_version_core("numpy")

        # unmet requirements due to version conflict
        for req in ["numpy==1.0.0", "numpy>=1000.0.0", f"numpy<{numpy_ver}"]:
            try:
                require_version_core(req)
            except ImportError as e:
                self.assertIn(f"{req} is required", str(e))
                self.assertIn("but found", str(e))

        # unmet requirements due to missing module
        for req in ["numpipypie>1", "numpipypie2"]:
            try:
                require_version_core(req)
            except importlib.metadata.PackageNotFoundError as e:
                self.assertIn(f"The '{req}' distribution was not found and is required by this application", str(e))
                self.assertIn("Try: pip install transformers -U", str(e))

        # bogus requirements formats:
        # 1. whole thing
        for req in ["numpy??1.0.0", "numpy1.0.0"]:
            try:
                require_version_core(req)
            except ValueError as e:
                self.assertIn("requirement needs to be in the pip package format", str(e))
        # 2. only operators
        for req in ["numpy=1.0.0", "numpy == 1.00", "numpy<>1.0.0", "numpy><1.00", "numpy>>1.0.0"]:
            try:
                require_version_core(req)
            except ValueError as e:
                self.assertIn("need one of ", str(e))

    def test_python(self):
        # matching requirement
        require_version("python>=3.6.0")

        # not matching requirements
        for req in ["python>9.9.9", "python<3.0.0"]:
            try:
                require_version_core(req)
            except ImportError as e:
                self.assertIn(f"{req} is required", str(e))
                self.assertIn(f"but found python=={python_ver}", str(e))
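A small, hedged usage sketch of the requirement format checked above; the requirement strings are illustrative, and `require_version` accepts an optional `hint` that is appended to the error message, while `require_version_core` adds its own "pip install transformers -U" hint.

from transformers.utils.versions import require_version, require_version_core

# pip-style requirement strings: a package name followed by one of ==, !=, <=, >=, <, >,
# optionally several constraints separated by commas, or just a bare package name.
require_version("numpy>=1.17,<3000")               # passes if the installed numpy satisfies both bounds
require_version("tqdm", hint="pip install tqdm")   # bare name: only checks the package is installed

try:
    require_version_core("numpy>=9999.0")          # version conflict -> ImportError with the core hint
except ImportError as e:
    print(e)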
Copyright 2023 The HuggingFace Inc. team; licensed under the Apache License, Version 2.0.

A script to add and/or update the attribute `pipeline_model_mapping` in model test files. It is mostly used in two situations: run within a scheduled CI job to check whether model test files in the library have an up-to-date `pipeline_model_mapping`, and/or to update test files and possibly open a GitHub pull request automatically; or run by a `transformers` member to quickly check and update particular test file(s). This script is not intended to be run manually by community contributors.

Notes on the implementation below: do not add an item to `TEST_FILE_TO_IGNORE` unless the reason is approved (the pipeline test mapping for ESMFold is added to test_modeling_esm.py instead). `get_framework` infers the framework (pt/tf/flax) from a test class. `get_mapping_for_task` returns, and caches, the mappings defined in the `XxxPipelineTests` class for a task. `get_model_for_pipeline_test` returns the model architecture(s) related to a test class for a pipeline task; this could be a list/tuple of model classes, but that is rare. `get_pipeline_model_mapping_string` produces `pipeline_model_mapping` as a one-line string to be added to the test file, which `make style` will later format nicely. `is_valid_test_class` restricts to `XxxModelTesterMixin` subclasses that are also `unittest.TestCase` subclasses, and `find_test_class` picks the test class to update: one that already defines `pipeline_model_mapping` if possible, otherwise the class with the shortest name (just a heuristic). `add_pipeline_model_mapping` then edits the class source: it locates the last of the `all_model_classes`, `all_generative_model_classes` or `pipeline_model_mapping` blocks (assumed to be defined in that order), finds the inclusive end of that block, reuses any `is_xxx_available()` condition found there (some models require specific libraries like timm and use `is_timm_available` instead of `is_torch_available`), marks an existing `pipeline_model_mapping` for removal, makes the test class a subclass of `PipelineTesterMixin` placed just before `unittest.TestCase`, rewrites the class declaration on a single line, inserts the new attribute with the right indentation, and writes the updated module back to the test file, being careful with the one-off between line numbers and array indices. Flax is not concerned at this moment.
import argparse import glob import inspect import os import re import unittest from get_test_info import get_test_classes from tests.test_pipeline_mixin import pipeline_test_mapping PIPELINE_TEST_MAPPING = {} for task, _ in pipeline_test_mapping.items(): PIPELINE_TEST_MAPPING[task] = {"pt": None, "tf": None} TEST_FILE_TO_IGNORE = { "tests/models/esm/test_modeling_esmfold.py", } def get_framework(test_class): if "ModelTesterMixin" in [x.__name__ for x in test_class.__bases__]: return "pt" elif "TFModelTesterMixin" in [x.__name__ for x in test_class.__bases__]: return "tf" elif "FlaxModelTesterMixin" in [x.__name__ for x in test_class.__bases__]: return "flax" else: return None def get_mapping_for_task(task, framework): if PIPELINE_TEST_MAPPING[task].get(framework, None) is not None: return PIPELINE_TEST_MAPPING[task][framework] pipeline_test_class = pipeline_test_mapping[task]["test"] mapping = None if framework == "pt": mapping = getattr(pipeline_test_class, "model_mapping", None) elif framework == "tf": mapping = getattr(pipeline_test_class, "tf_model_mapping", None) if mapping is not None: mapping = dict(mapping.items()) PIPELINE_TEST_MAPPING[task][framework] = mapping return mapping def get_model_for_pipeline_test(test_class, task): framework = get_framework(test_class) if framework is None: return None mapping = get_mapping_for_task(task, framework) if mapping is None: return None config_classes = list({model_class.config_class for model_class in test_class.all_model_classes}) if len(config_classes) != 1: raise ValueError("There should be exactly one configuration class from `test_class.all_model_classes`.") model_class = mapping.get(config_classes[0], None) if isinstance(model_class, (tuple, list)): model_class = sorted(model_class, key=lambda x: x.__name__) return model_class def get_pipeline_model_mapping(test_class): mapping = [(task, get_model_for_pipeline_test(test_class, task)) for task in pipeline_test_mapping] mapping = sorted([(task, model) for task, model in mapping if model is not None], key=lambda x: x[0]) return dict(mapping) def get_pipeline_model_mapping_string(test_class): framework = get_framework(test_class) if framework == "pt": framework = "torch" default_value = "{}" mapping = get_pipeline_model_mapping(test_class) if len(mapping) == 0: return "" texts = [] for task, model_classes in mapping.items(): if isinstance(model_classes, (tuple, list)): value = "(" + ", ".join([x.__name__ for x in model_classes]) + ")" else: value = model_classes.__name__ texts.append(f'"{task}": {value}') text = "{" + ", ".join(texts) + "}" text = f"pipeline_model_mapping = {text} if is_{framework}_available() else {default_value}" return text def is_valid_test_class(test_class): base_class_names = {"ModelTesterMixin", "TFModelTesterMixin", "FlaxModelTesterMixin"} if not issubclass(test_class, unittest.TestCase): return False return len(base_class_names.intersection([x.__name__ for x in test_class.__bases__])) > 0 def find_test_class(test_file): test_classes = [x for x in get_test_classes(test_file) if is_valid_test_class(x)] target_test_class = None for test_class in test_classes: if getattr(test_class, "pipeline_model_mapping", None) is not None: target_test_class = test_class break if target_test_class is None and len(test_classes) > 0: target_test_class = sorted(test_classes, key=lambda x: (len(x.__name__), x.__name__))[0] return target_test_class def find_block_ending(lines, start_idx, indent_level): end_idx = start_idx for idx, line in enumerate(lines[start_idx:]): indent = 
len(line) - len(line.lstrip()) if idx == 0 or indent > indent_level or (indent == indent_level and line.strip() == ")"): end_idx = start_idx + idx elif idx > 0 and indent <= indent_level: break return end_idx def add_pipeline_model_mapping(test_class, overwrite=False): if getattr(test_class, "pipeline_model_mapping", None) is not None: if not overwrite: return "", -1 line_to_add = get_pipeline_model_mapping_string(test_class) if len(line_to_add) == 0: return "", -1 line_to_add = line_to_add + "\n" class_lines, class_start_line_no = inspect.getsourcelines(test_class) for idx, line in enumerate(class_lines): if line.lstrip().startswith("class "): class_lines = class_lines[idx:] class_start_line_no += idx break class_end_line_no = class_start_line_no + len(class_lines) - 1 start_idx = None indent_level = 0 def_line = None for idx, line in enumerate(class_lines): if line.strip().startswith("all_model_classes = "): indent_level = len(line) - len(line.lstrip()) start_idx = idx elif line.strip().startswith("all_generative_model_classes = "): indent_level = len(line) - len(line.lstrip()) start_idx = idx elif line.strip().startswith("pipeline_model_mapping = "): indent_level = len(line) - len(line.lstrip()) start_idx = idx def_line = line break if start_idx is None: return "", -1 end_idx = find_block_ending(class_lines, start_idx, indent_level) r = re.compile(r"\s(is_\S+?_available\(\))\s") for line in class_lines[start_idx : end_idx + 1]: backend_condition = r.search(line) if backend_condition is not None: target = " " + backend_condition[0][1:-1] + " " line_to_add = r.sub(target, line_to_add) break if def_line is None: target_idx = end_idx else: target_idx = start_idx - 1 for idx in range(start_idx, end_idx + 1): class_lines[idx] = None parent_classes = [x.__name__ for x in test_class.__bases__] if "PipelineTesterMixin" not in parent_classes: _parent_classes = [x for x in parent_classes if x != "TestCase"] + ["PipelineTesterMixin"] if "TestCase" in parent_classes: _parent_classes.append("unittest.TestCase") parent_classes = ", ".join(_parent_classes) for idx, line in enumerate(class_lines): if line.strip().endswith("):"): for _idx in range(idx + 1): class_lines[_idx] = None break class_lines[0] = f"class {test_class.__name__}({parent_classes}):\n" line_to_add = " " * indent_level + line_to_add class_lines = class_lines[: target_idx + 1] + [line_to_add] + class_lines[target_idx + 1 :] class_lines = [x for x in class_lines if x is not None] module_lines = inspect.getsourcelines(inspect.getmodule(test_class))[0] module_lines = module_lines[: class_start_line_no - 1] + class_lines + module_lines[class_end_line_no:] code = "".join(module_lines) moddule_file = inspect.getsourcefile(test_class) with open(moddule_file, "w", encoding="UTF-8", newline="\n") as fp: fp.write(code) return line_to_add def add_pipeline_model_mapping_to_test_file(test_file, overwrite=False): test_class = find_test_class(test_file) if test_class: add_pipeline_model_mapping(test_class, overwrite=overwrite) if __name__ == "__main__": parser = argparse.ArgumentParser() parser.add_argument( "--test_file", type=str, help="A path to the test file, starting with the repository's `tests` directory." 
) parser.add_argument( "--all", action="store_true", help="If to check and modify all test files.", ) parser.add_argument( "--overwrite", action="store_true", help="If to overwrite a test class if it has already defined `pipeline_model_mapping`.", ) args = parser.parse_args() if not args.all and not args.test_file: raise ValueError("Please specify either `test_file` or pass `--all` to check/modify all test files.") elif args.all and args.test_file: raise ValueError("Only one of `--test_file` and `--all` could be specified.") test_files = [] if args.test_file: test_files = [args.test_file] else: pattern = os.path.join("tests", "models", "**", "test_modeling_*.py") for test_file in glob.glob(pattern): if not test_file.startswith("test_modeling_flax_"): test_files.append(test_file) for test_file in test_files: if test_file in TEST_FILE_TO_IGNORE: print(f"[SKIPPED] {test_file} is skipped as it is in `TEST_FILE_TO_IGNORE` in the file {__file__}.") continue add_pipeline_model_mapping_to_test_file(test_file, overwrite=args.overwrite)
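A hedged usage sketch for the script defined above; the test-file path is illustrative, and the programmatic import assumes the module is saved as utils/add_pipeline_model_mapping_to_test.py as in the repo layout.

# Command-line usage (illustrative paths):
#   python utils/add_pipeline_model_mapping_to_test.py --test_file tests/models/bert/test_modeling_bert.py
#   python utils/add_pipeline_model_mapping_to_test.py --all --overwrite

# Programmatic usage, under the same assumption:
from add_pipeline_model_mapping_to_test import add_pipeline_model_mapping_to_test_file

add_pipeline_model_mapping_to_test_file(
    "tests/models/bert/test_modeling_bert.py",  # illustrative test file
    overwrite=False,  # keep an existing `pipeline_model_mapping` if one is already defined
)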
Copyright 2023 The HuggingFace Inc. team; licensed under the Apache License, Version 2.0. Test that all the custom file extensions added in the setup are present in the built package.
import argparse
import importlib
from pathlib import Path


# All the custom extensions added in the setup.
FILES_TO_FIND = [
    "kernels/rwkv/wkv_cuda.cu",
    "kernels/rwkv/wkv_op.cpp",
    "kernels/deformable_detr/ms_deform_attn.h",
    "kernels/deformable_detr/cuda/ms_deform_im2col_cuda.cuh",
    "models/graphormer/algos_graphormer.pyx",
]


def test_custom_files_are_present(transformers_path):
    # Test all the extensions added in the setup are present in the given installation path.
    for file in FILES_TO_FIND:
        if not (transformers_path / file).exists():
            return False
    return True


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--check_lib", action="store_true", help="Whether to check the build or the actual package.")
    args = parser.parse_args()
    if args.check_lib:
        transformers_module = importlib.import_module("transformers")
        transformers_path = Path(transformers_module.__file__).parent
    else:
        transformers_path = Path.cwd() / "build/lib/transformers"
    if not test_custom_files_are_present(transformers_path):
        raise ValueError("The built release does not contain the custom files. Fix this before going further!")
Copyright 2023 The HuggingFace Inc. team; licensed under the Apache License, Version 2.0.

All paths are set with the intent that you run this script from the root of the repo with `python utils/check_config_attributes.py`; `direct_transformers_import` makes sure the `transformers` module imported is the one in the repo.

`SPECIAL_CASES_TO_ALLOW` lists configuration attributes that are legitimately unused in modeling files, for example: values used to compute a property such as `self.chunk_length`; values used as `self.bert_model = BertModel(config)`; attributes that are not used in modeling files but carry important information; attributes used internally in the configuration class file; attributes used during training even though no training script exists yet for those models (`ignore_value`, `norm`, `spatial_pos_max`); attributes used in conversion scripts or during preprocessing and collation (see collating_graphormer.py); `tokenizer_class` intentionally overriding the default `T5Tokenizer`; `layer_norm_eps` values whose defaults differ from 1e-5 and cannot be fixed without breaking; attributes used internally to calculate the feature size or `mlp_dim`; attributes meant for head training that are not implemented so far; attributes that only provide useful information to users; and attributes actually used in the config or generation config and necessary for sub-component generation. The second block gathers failing cases that still need to be checked, fixed, or moved to the first block once we are sure (TODO ydshieh), entries kept for backward compatibility with trust-remote-code models, and TODOs for `alignment_head`/`alignment_layer` (Arthur) and `is_decoder` (Younes).

`check_attribute_being_used(config_class, attributes, default_value, source_strings)` checks whether any name in `attributes` (an `__init__` argument and its variant names, if any) is used in one of the Python source strings coming from the modeling files in the same directory as `config_class` (the file defining `config_class` itself is excluded). It looks for `config.xxx`, `getattr(config, "xxx")` or `getattr(self.config, "xxx")`, uses a regex to deal with multi-line `getattr` calls, and treats the summary-related attributes as used when `SequenceSummary(config)` is called. Some cases are allowed even without a match: common and important attributes that do not always appear in modeling files, `is_encoder_decoder` defaulting to True and `tie_word_embeddings` defaulting to False (defaults different from `PretrainedConfig`), `*_token_id` attributes, and the configuration-class-specific cases listed above.

`check_config_attributes_being_used(config_class)` checks that the arguments of `__init__` are used in the modeling files in the same directory: it collects the `__init__` parameters and their default values; if `attribute_map` exists, an attribute can go by different names in the modeling files and the check passes as long as one variant is used; it then reads the source of all modeling files (all frameworks are checked, and as long as one framework uses an attribute we are good) and returns the unused attributes. Finally, `check_config_attributes()` runs the check for every configuration class, skipping deprecated models and also covering config classes that are not in `CONFIG_MAPPING` (e.g. `CLIPVisionConfig`, `Blip2VisionConfig`), and raises an error listing any configuration class with unused attributes in the corresponding modeling files.
import inspect import os import re from transformers.configuration_utils import PretrainedConfig from transformers.utils import direct_transformers_import PATH_TO_TRANSFORMERS = "src/transformers" transformers = direct_transformers_import(PATH_TO_TRANSFORMERS) CONFIG_MAPPING = transformers.models.auto.configuration_auto.CONFIG_MAPPING SPECIAL_CASES_TO_ALLOW = { "EncodecConfig": ["overlap"], "DPRConfig": True, "FuyuConfig": True, "FSMTConfig": ["langs"], "GPTNeoConfig": ["attention_types"], "EsmConfig": ["is_folding_model"], "Mask2FormerConfig": ["ignore_value"], "OneFormerConfig": ["ignore_value", "norm"], "GraphormerConfig": ["spatial_pos_max"], "T5Config": ["feed_forward_proj"], "MT5Config": ["feed_forward_proj", "tokenizer_class"], "UMT5Config": ["feed_forward_proj", "tokenizer_class"], "LongT5Config": ["feed_forward_proj"], "Pop2PianoConfig": ["feed_forward_proj"], "SwitchTransformersConfig": ["feed_forward_proj"], "BioGptConfig": ["layer_norm_eps"], "GLPNConfig": ["layer_norm_eps"], "SegformerConfig": ["layer_norm_eps"], "CvtConfig": ["layer_norm_eps"], "PerceiverConfig": ["layer_norm_eps"], "InformerConfig": ["num_static_real_features", "num_time_features"], "TimeSeriesTransformerConfig": ["num_static_real_features", "num_time_features"], "AutoformerConfig": ["num_static_real_features", "num_time_features"], "SamVisionConfig": ["mlp_ratio"], "ClapAudioConfig": ["num_classes"], "SpeechT5HifiGanConfig": ["sampling_rate"], "SeamlessM4TConfig": [ "max_new_tokens", "t2u_max_new_tokens", "t2u_decoder_attention_heads", "t2u_decoder_ffn_dim", "t2u_decoder_layers", "t2u_encoder_attention_heads", "t2u_encoder_ffn_dim", "t2u_encoder_layers", "t2u_max_position_embeddings", ], "SeamlessM4Tv2Config": [ "max_new_tokens", "t2u_decoder_attention_heads", "t2u_decoder_ffn_dim", "t2u_decoder_layers", "t2u_encoder_attention_heads", "t2u_encoder_ffn_dim", "t2u_encoder_layers", "t2u_max_position_embeddings", "t2u_variance_pred_dropout", "t2u_variance_predictor_embed_dim", "t2u_variance_predictor_hidden_dim", "t2u_variance_predictor_kernel_size", ], } SPECIAL_CASES_TO_ALLOW.update( { "CLIPSegConfig": True, "DeformableDetrConfig": True, "DetaConfig": True, "DinatConfig": True, "DonutSwinConfig": True, "EfficientFormerConfig": True, "FSMTConfig": True, "JukeboxConfig": True, "LayoutLMv2Config": True, "MaskFormerSwinConfig": True, "MT5Config": True, "MptConfig": True, "MptAttentionConfig": True, "NatConfig": True, "OneFormerConfig": True, "PerceiverConfig": True, "RagConfig": True, "SpeechT5Config": True, "SwinConfig": True, "Swin2SRConfig": True, "Swinv2Config": True, "SwitchTransformersConfig": True, "TableTransformerConfig": True, "TapasConfig": True, "UniSpeechConfig": True, "UniSpeechSatConfig": True, "WavLMConfig": True, "WhisperConfig": True, "JukeboxPriorConfig": True, "Pix2StructTextConfig": True, "IdeficsConfig": True, "IdeficsVisionConfig": True, "IdeficsPerceiverConfig": True, } ) def check_attribute_being_used(config_class, attributes, default_value, source_strings): attribute_used = False for attribute in attributes: for modeling_source in source_strings: if ( f"config.{attribute}" in modeling_source or f'getattr(config, "{attribute}"' in modeling_source or f'getattr(self.config, "{attribute}"' in modeling_source ): attribute_used = True elif ( re.search( rf'getattr[ \t\v\n\r\f]*\([ \t\v\n\r\f]*(self\.)?config,[ \t\v\n\r\f]*"{attribute}"', modeling_source, ) is not None ): attribute_used = True elif attribute in [ "summary_type", "summary_use_proj", "summary_activation", "summary_last_dropout", 
"summary_proj_to_labels", "summary_first_dropout", ]: if "SequenceSummary" in modeling_source: attribute_used = True if attribute_used: break if attribute_used: break attributes_to_allow = [ "bos_index", "eos_index", "pad_index", "unk_index", "mask_index", "image_size", "use_cache", "out_features", "out_indices", "sampling_rate", ] attributes_used_in_generation = ["encoder_no_repeat_ngram_size"] case_allowed = True if not attribute_used: case_allowed = False for attribute in attributes: if attribute in ["is_encoder_decoder"] and default_value is True: case_allowed = True elif attribute in ["tie_word_embeddings"] and default_value is False: case_allowed = True elif attribute in attributes_to_allow + attributes_used_in_generation: case_allowed = True elif attribute.endswith("_token_id"): case_allowed = True if not case_allowed: allowed_cases = SPECIAL_CASES_TO_ALLOW.get(config_class.__name__, []) case_allowed = allowed_cases is True or attribute in allowed_cases return attribute_used or case_allowed def check_config_attributes_being_used(config_class): signature = dict(inspect.signature(config_class.__init__).parameters) parameter_names = [x for x in list(signature.keys()) if x not in ["self", "kwargs"]] parameter_defaults = [signature[param].default for param in parameter_names] reversed_attribute_map = {} if len(config_class.attribute_map) > 0: reversed_attribute_map = {v: k for k, v in config_class.attribute_map.items()} config_source_file = inspect.getsourcefile(config_class) model_dir = os.path.dirname(config_source_file) modeling_paths = [os.path.join(model_dir, fn) for fn in os.listdir(model_dir) if fn.startswith("modeling_")] modeling_sources = [] for path in modeling_paths: if os.path.isfile(path): with open(path, encoding="utf8") as fp: modeling_sources.append(fp.read()) unused_attributes = [] for config_param, default_value in zip(parameter_names, parameter_defaults): attributes = [config_param] if config_param in reversed_attribute_map: attributes.append(reversed_attribute_map[config_param]) if not check_attribute_being_used(config_class, attributes, default_value, modeling_sources): unused_attributes.append(attributes[0]) return sorted(unused_attributes) def check_config_attributes(): configs_with_unused_attributes = {} for _config_class in list(CONFIG_MAPPING.values()): if "models.deprecated" in _config_class.__module__: continue config_classes_in_module = [ cls for name, cls in inspect.getmembers( inspect.getmodule(_config_class), lambda x: inspect.isclass(x) and issubclass(x, PretrainedConfig) and inspect.getmodule(x) == inspect.getmodule(_config_class), ) ] for config_class in config_classes_in_module: unused_attributes = check_config_attributes_being_used(config_class) if len(unused_attributes) > 0: configs_with_unused_attributes[config_class.__name__] = unused_attributes if len(configs_with_unused_attributes) > 0: error = "The following configuration classes contain unused attributes in the corresponding modeling files:\n" for name, attributes in configs_with_unused_attributes.items(): error += f"{name}: {attributes}\n" raise ValueError(error) if __name__ == "__main__": check_config_attributes()
Copyright 2022 The HuggingFace Inc. team; licensed under the Apache License, Version 2.0.

All paths are set with the intent that you run this script from the root of the repo with `python utils/check_config_docstrings.py`; `direct_transformers_import` makes sure the `transformers` module imported is the one in the repo. `_re_checkpoint` is the regex pattern used to find the checkpoint mentioned in the docstring of a config class, for example `[bert-base-uncased](https://huggingface.co/bert-base-uncased)`. `get_checkpoint_from_config_class` scans the source code of the config class: each checkpoint is a tuple of a checkpoint name and a checkpoint link, the link is allowed to end with `/`, and the checkpoint name must correspond to the checkpoint link. Deprecated models are skipped.
import inspect
import re

from transformers.utils import direct_transformers_import


PATH_TO_TRANSFORMERS = "src/transformers"

# This is to make sure the transformers module imported is the one in the repo.
transformers = direct_transformers_import(PATH_TO_TRANSFORMERS)

CONFIG_MAPPING = transformers.models.auto.configuration_auto.CONFIG_MAPPING

# Regex pattern used to find the checkpoint mentioned in the docstring of `config_class`.
# For example, `[bert-base-uncased](https://huggingface.co/bert-base-uncased)`
_re_checkpoint = re.compile(r"\[(.+?)\]\((https://huggingface\.co/.+?)\)")


CONFIG_CLASSES_TO_IGNORE_FOR_DOCSTRING_CHECKPOINT_CHECK = {
    "DecisionTransformerConfig",
    "EncoderDecoderConfig",
    "MusicgenConfig",
    "RagConfig",
    "SpeechEncoderDecoderConfig",
    "TimmBackboneConfig",
    "VisionEncoderDecoderConfig",
    "VisionTextDualEncoderConfig",
    "LlamaConfig",
}


def get_checkpoint_from_config_class(config_class):
    checkpoint = None

    # source code of `config_class`
    config_source = inspect.getsource(config_class)
    checkpoints = _re_checkpoint.findall(config_source)

    # Each `checkpoint` is a tuple of a checkpoint name and a checkpoint link.
    # For example, `('bert-base-uncased', 'https://huggingface.co/bert-base-uncased')`
    for ckpt_name, ckpt_link in checkpoints:
        # allow the link to end with `/`
        if ckpt_link.endswith("/"):
            ckpt_link = ckpt_link[:-1]

        # verify the checkpoint name corresponds to the checkpoint link
        ckpt_link_from_name = f"https://huggingface.co/{ckpt_name}"
        if ckpt_link == ckpt_link_from_name:
            checkpoint = ckpt_name
            break

    return checkpoint


def check_config_docstrings_have_checkpoints():
    configs_without_checkpoint = []

    for config_class in list(CONFIG_MAPPING.values()):
        # Skip deprecated models
        if "models.deprecated" in config_class.__module__:
            continue
        checkpoint = get_checkpoint_from_config_class(config_class)

        name = config_class.__name__
        if checkpoint is None and name not in CONFIG_CLASSES_TO_IGNORE_FOR_DOCSTRING_CHECKPOINT_CHECK:
            configs_without_checkpoint.append(name)

    if len(configs_without_checkpoint) > 0:
        message = "\n".join(sorted(configs_without_checkpoint))
        raise ValueError(
            f"The following configurations don't contain any valid checkpoint:\n{message}\n\n"
            "The requirement is to include a link pointing to one of the models of this architecture in the "
            "docstring of the config classes listed above. The link should have be a markdown format like "
            "[myorg/mymodel](https://huggingface.co/myorg/mymodel)."
        )


if __name__ == "__main__":
    check_config_docstrings_have_checkpoints()
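A hedged mini-example of the checkpoint-extraction regex above, run on a toy docstring snippet (the links are illustrative).

toy_docstring = (
    "Instantiate a configuration similar to "
    "[bert-base-uncased](https://huggingface.co/bert-base-uncased) or "
    "[some model](https://huggingface.co/another/checkpoint)."
)

print(_re_checkpoint.findall(toy_docstring))
# [('bert-base-uncased', 'https://huggingface.co/bert-base-uncased'),
#  ('some model', 'https://huggingface.co/another/checkpoint')]
# Only the first pair satisfies link == f"https://huggingface.co/{name}", so only
# "bert-base-uncased" would be accepted as the checkpoint for a config class.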
Copyright 2022 The HuggingFace Inc. team; licensed under the Apache License, Version 2.0.

This script is responsible for cleaning the model section of the table of contents by removing duplicates and sorting the entries in alphabetical order. Usage, from the root of the repo: check that the table of contents is properly sorted (used in `make quality`) with `python utils/check_doc_toc.py`, or auto-sort it if it is not properly sorted (used in `make style`) with `python utils/check_doc_toc.py --fix_and_overwrite`.

`clean_model_doc_toc(model_doc)` cleans one modality section of the model documentation ToC by removing duplicates and sorting models alphabetically; it takes the list of dictionaries extracted from `_toctree.yml` for that modality and returns the cleaned, sorted list (each duplicated key is added only once). `check_model_doc(overwrite=False)` checks that the model API doc part of `_toctree.yml` is clean (no duplicates, sorted) and, when `overwrite=True`, auto-cleans it: it navigates to the API doc, then to the model doc, extracts the modalities and cleans them one by one.
import argparse
from collections import defaultdict
from typing import List

import yaml


PATH_TO_TOC = "docs/source/en/_toctree.yml"


def clean_model_doc_toc(model_doc: List[dict]) -> List[dict]:
    counts = defaultdict(int)
    for doc in model_doc:
        counts[doc["local"]] += 1
    duplicates = [key for key, value in counts.items() if value > 1]

    new_doc = []
    for duplicate_key in duplicates:
        titles = list({doc["title"] for doc in model_doc if doc["local"] == duplicate_key})
        if len(titles) > 1:
            raise ValueError(
                f"{duplicate_key} is present several times in the documentation table of content at "
                "`docs/source/en/_toctree.yml` with different *Title* values. Choose one of those and remove the "
                "others."
            )
        # Only add this once
        new_doc.append({"local": duplicate_key, "title": titles[0]})

    # Add none duplicate-keys
    new_doc.extend([doc for doc in model_doc if counts[doc["local"]] == 1])

    # Sort
    return sorted(new_doc, key=lambda s: s["title"].lower())


def check_model_doc(overwrite: bool = False):
    with open(PATH_TO_TOC, encoding="utf-8") as f:
        content = yaml.safe_load(f.read())

    # Get to the API doc
    api_idx = 0
    while content[api_idx]["title"] != "API":
        api_idx += 1
    api_doc = content[api_idx]["sections"]

    # Then to the model doc
    model_idx = 0
    while api_doc[model_idx]["title"] != "Models":
        model_idx += 1
    model_doc = api_doc[model_idx]["sections"]

    # Extract the modalities and clean them one by one.
    modalities_docs = [(idx, section) for idx, section in enumerate(model_doc) if "sections" in section]
    diff = False
    for idx, modality_doc in modalities_docs:
        old_modality_doc = modality_doc["sections"]
        new_modality_doc = clean_model_doc_toc(old_modality_doc)

        if old_modality_doc != new_modality_doc:
            diff = True
            if overwrite:
                model_doc[idx]["sections"] = new_modality_doc

    if diff:
        if overwrite:
            api_doc[model_idx]["sections"] = model_doc
            content[api_idx]["sections"] = api_doc
            with open(PATH_TO_TOC, "w", encoding="utf-8") as f:
                f.write(yaml.dump(content, allow_unicode=True))
        else:
            raise ValueError(
                "The model doc part of the table of content is not properly sorted, run `make style` to fix this."
            )


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--fix_and_overwrite", action="store_true", help="Whether to fix inconsistencies.")
    args = parser.parse_args()

    check_model_doc(args.fix_and_overwrite)
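A hedged toy example of `clean_model_doc_toc` on a hand-made modality section; the entries are illustrative, not taken from the real `_toctree.yml`.

# Illustrative modality section with one duplicated entry ("bert") and unsorted titles.
toy_section = [
    {"local": "model_doc/bert", "title": "BERT"},
    {"local": "model_doc/albert", "title": "ALBERT"},
    {"local": "model_doc/bert", "title": "BERT"},
]

print(clean_model_doc_toc(toy_section))
# [{'local': 'model_doc/albert', 'title': 'ALBERT'}, {'local': 'model_doc/bert', 'title': 'BERT'}]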
Copyright 2023 The HuggingFace Inc. team. Licensed under the Apache License, Version 2.0; distributed on an "AS IS" basis, without warranties or conditions of any kind, either express or implied.

This script cleans the list of doctests by making sure the entries all exist and are in alphabetical order.

Usage (from the root of the repo): check that the doctest list is properly sorted and that all files exist (used in `make repo-consistency`) with `python utils/check_doctest_list.py`; auto-sort the doctest list if it is not properly sorted (used in `make fix-copies`) with `python utils/check_doctest_list.py --fix_and_overwrite`. All paths are set with the intent that you run this script from the root of the repo.

clean_doctest_list(doctest_file, overwrite=False): cleans the doctest list in a given file. Args: doctest_file (str): the path to the doctest file to check or clean; overwrite (bool, optional, defaults to False): whether or not to fix problems; if False, the function errors when the file is not clean.
import argparse
import os


# All paths are set with the intent you should run this script from the root of the repo.
REPO_PATH = "."
DOCTEST_FILE_PATHS = ["not_doctested.txt", "slow_documentation_tests.txt"]


def clean_doctest_list(doctest_file: str, overwrite: bool = False):
    non_existent_paths = []
    all_paths = []
    with open(doctest_file, "r", encoding="utf-8") as f:
        for line in f:
            line = line.strip().split(" ")[0]
            path = os.path.join(REPO_PATH, line)
            if not (os.path.isfile(path) or os.path.isdir(path)):
                non_existent_paths.append(line)
            all_paths.append(line)

    if len(non_existent_paths) > 0:
        non_existent_paths = "\n".join([f"- {f}" for f in non_existent_paths])
        raise ValueError(f"`{doctest_file}` contains non-existent paths:\n{non_existent_paths}")

    sorted_paths = sorted(all_paths)
    if all_paths != sorted_paths:
        if not overwrite:
            raise ValueError(
                f"Files in `{doctest_file}` are not in alphabetical order, run `make fix-copies` to fix "
                "this automatically."
            )
        with open(doctest_file, "w", encoding="utf-8") as f:
            f.write("\n".join(sorted_paths) + "\n")


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--fix_and_overwrite", action="store_true", help="Whether to fix inconsistencies.")
    args = parser.parse_args()

    for doctest_file in DOCTEST_FILE_PATHS:
        doctest_file = os.path.join(REPO_PATH, "utils", doctest_file)
        clean_doctest_list(doctest_file, args.fix_and_overwrite)
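For illustration, a standalone sketch of the ordering check on an in-memory list (the file names are hypothetical; the real script reads utils/not_doctested.txt and utils/slow_documentation_tests.txt):

paths = [
    "docs/source/en/quicktour.md",
    "docs/source/en/autoclass_tutorial.md",  # out of alphabetical order on purpose
]

sorted_paths = sorted(paths)
if paths != sorted_paths:
    # The real script either raises (check mode) or rewrites the file (--fix_and_overwrite).
    print("Not sorted; expected order:")
    print("\n".join(sorted_paths))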
Copyright 2020 The HuggingFace Inc. team. Licensed under the Apache License, Version 2.0; distributed on an "AS IS" basis, without warranties or conditions of any kind, either express or implied.

Utility that checks the custom inits of Transformers are well defined. Transformers uses init files that delay the import of an object to when it is actually needed; this avoids the main init importing all models, which would make the line `import transformers` very slow when the user has all optional dependencies installed. The inits with delayed imports have two halves: one defining a dictionary `_import_structure`, which maps modules to the names of the objects in each module, and one under `TYPE_CHECKING`, which looks like a normal init for type checkers. The goal of this script is to check that the objects defined in both halves are the same. It also checks that the main init properly references all submodules, even if it doesn't import anything from them: every submodule should be defined as a key of `_import_structure` (potentially with an empty list as value), or the submodule won't be importable.

Use from the root of the repo with `python utils/check_inits.py` for a check that will error in case of inconsistencies (used by `make repo-consistency`). There is no auto-fix possible here, sadly. The path is set with the intent that you run this script from the root of the repo.

The module defines a set of regexes: one that matches `is_xxx_available()`; one that catches a one-line `_import_structure = {xxx}`; one that catches a line with a key/values pattern (`"bla": ["foo", "bar"]`); one that catches a line `if not is_foo_available()`; ones that catch `_import_structure["bla"].append("foo")` and `_import_structure["bla"].extend(["foo", "bar"])` or `_import_structure["bla"] = ["foo", "bar"]`; one that catches a line with an object between quotes and a comma (`"MyModel",`); one that catches a line with objects between brackets only (`["foo", "bar"]`); one that catches a line with `from foo import bar, bla, boo`; one that catches a line with `try:`; and one that catches a line with `else:`.

find_backend(line): finds one or multiple backends in a code line of the init. Args: line (str): a code line of the main init. Returns: Optional[str]: if one or several backends are found, returns them; in the case of multiple backends (the line contains `if is_xxx_available() and is_yyy_available()`), returns all backends joined on `_and_`, so `xxx_and_yyy` for instance.

parse_init(init_file): reads an init file and parses, per backend, the `_import_structure` objects defined and the `TYPE_CHECKING` objects defined. Args: init_file (str): path to the init file to inspect. Returns: Optional[Tuple[Dict[str, List[str]], Dict[str, List[str]]]]: a tuple of two dictionaries mapping backends to lists of imported objects, one for the `_import_structure` part of the init and one for the `TYPE_CHECKING` part; returns None if the init is not a custom init. The parsing proceeds as follows: get to the `_import_structure` definition (if this is a traditional init, just return); first grab the objects without a specific backend in `_import_structure` (if everything is on a single line, deal with that case; those objects are stored with the key "none"); then continue with backend-specific objects in `_import_structure`: if the line is an `if not is_backend_available()`, grab all associated objects, check that the backend declaration is inside a `try` block, scroll until the `else` block of the `try`/`except`/`else`, and add the backend objects to the list until we unindent. At this stage we are in the `TYPE_CHECKING` part: first grab the objects without a specific backend, then continue with backend-specific objects in the same way (if the line is an `if is_backend_available()`, grab all associated objects, check that the backend declaration is inside a `try` block, scroll until the `else` block, and add the backend objects to the list until we unindent).

analyze_results(import_dict_objects, type_hint_objects): analyzes the differences between the `_import_structure` objects and the `TYPE_CHECKING` objects found in an init. Args: import_dict_objects (Dict[str, List[str]]): a dictionary mapping backend names ("none" for the objects independent of any specific backend) to lists of imported objects; type_hint_objects (Dict[str, List[str]]): the same, for the `TYPE_CHECKING` half. Returns: List[str]: the list of errors corresponding to mismatches. If one backend is missing from the other part of the init, it errors early; otherwise it finds all errors: duplicate imports in either half, and imports missing from either part of the init.

check_all_inits(): checks all inits in the Transformers repo and raises an error if at least one does not define the same objects in both halves.

get_transformers_submodules(): returns the list of Transformers submodules, ignoring private modules and leftovers from branches (empty folders apart from __pycache__).

check_submodules(): checks that all submodules of Transformers are properly registered in the main init, and errors otherwise. It imports `transformers` directly to make sure the module used is the one in the repo. The `_import_structure` object defined in the init contains all the base keys, but if the user is missing some optional dependencies they may not have all of them; the init is therefore read to collect all additions and potentially re-add them.
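As a quick, runnable illustration of the backend parsing described above (this mirrors the find_backend logic with slightly simplified regexes; it is not the script itself):

import re

_re_backend = re.compile(r"is_([a-z_]*)_available\(\)")
_re_test_backend = re.compile(r"^\s*if\s+not\s+is_[a-z_]*_available\(\)")


def find_backend_demo(line):
    # Returns e.g. "tf_and_torch" for a line gating on both backends, or None for other lines.
    if _re_test_backend.search(line) is None:
        return None
    backends = sorted(_re_backend.findall(line))
    return "_and_".join(backends)


print(find_backend_demo("if not is_torch_available() and not is_tf_available():"))  # tf_and_torch
print(find_backend_demo("foo = 1"))  # None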
import collections import os import re from pathlib import Path from typing import Dict, List, Optional, Tuple PATH_TO_TRANSFORMERS = "src/transformers" _re_backend = re.compile(r"is\_([a-z_]*)_available()") _re_one_line_import_struct = re.compile(r"^_import_structure\s+=\s+\{([^\}]+)\}") _re_import_struct_key_value = re.compile(r'\s+"\S*":\s+\[([^\]]*)\]') _re_test_backend = re.compile(r"^\s*if\s+not\s+is\_[a-z_]*\_available\(\)") _re_import_struct_add_one = re.compile(r'^\s*_import_structure\["\S*"\]\.append\("(\S*)"\)') _re_import_struct_add_many = re.compile(r"^\s*_import_structure\[\S*\](?:\.extend\(|\s*=\s+)\[([^\]]*)\]") _re_quote_object = re.compile(r'^\s+"([^"]+)",') _re_between_brackets = re.compile(r"^\s+\[([^\]]+)\]") _re_import = re.compile(r"\s+from\s+\S*\s+import\s+([^\(\s].*)\n") _re_try = re.compile(r"^\s*try:") _re_else = re.compile(r"^\s*else:") def find_backend(line: str) -> Optional[str]: if _re_test_backend.search(line) is None: return None backends = [b[0] for b in _re_backend.findall(line)] backends.sort() return "_and_".join(backends) def parse_init(init_file) -> Optional[Tuple[Dict[str, List[str]], Dict[str, List[str]]]]: with open(init_file, "r", encoding="utf-8", newline="\n") as f: lines = f.readlines() line_index = 0 while line_index < len(lines) and not lines[line_index].startswith("_import_structure = {"): line_index += 1 if line_index >= len(lines): return None objects = [] while not lines[line_index].startswith("if TYPE_CHECKING") and find_backend(lines[line_index]) is None: line = lines[line_index] if _re_one_line_import_struct.search(line): content = _re_one_line_import_struct.search(line).groups()[0] imports = re.findall(r"\[([^\]]+)\]", content) for imp in imports: objects.extend([obj[1:-1] for obj in imp.split(", ")]) line_index += 1 continue single_line_import_search = _re_import_struct_key_value.search(line) if single_line_import_search is not None: imports = [obj[1:-1] for obj in single_line_import_search.groups()[0].split(", ") if len(obj) > 0] objects.extend(imports) elif line.startswith(" " * 8 + '"'): objects.append(line[9:-3]) line_index += 1 import_dict_objects = {"none": objects} while not lines[line_index].startswith("if TYPE_CHECKING"): backend = find_backend(lines[line_index]) if _re_try.search(lines[line_index - 1]) is None: backend = None if backend is not None: line_index += 1 while _re_else.search(lines[line_index]) is None: line_index += 1 line_index += 1 objects = [] while len(lines[line_index]) <= 1 or lines[line_index].startswith(" " * 4): line = lines[line_index] if _re_import_struct_add_one.search(line) is not None: objects.append(_re_import_struct_add_one.search(line).groups()[0]) elif _re_import_struct_add_many.search(line) is not None: imports = _re_import_struct_add_many.search(line).groups()[0].split(", ") imports = [obj[1:-1] for obj in imports if len(obj) > 0] objects.extend(imports) elif _re_between_brackets.search(line) is not None: imports = _re_between_brackets.search(line).groups()[0].split(", ") imports = [obj[1:-1] for obj in imports if len(obj) > 0] objects.extend(imports) elif _re_quote_object.search(line) is not None: objects.append(_re_quote_object.search(line).groups()[0]) elif line.startswith(" " * 8 + '"'): objects.append(line[9:-3]) elif line.startswith(" " * 12 + '"'): objects.append(line[13:-3]) line_index += 1 import_dict_objects[backend] = objects else: line_index += 1 objects = [] while ( line_index < len(lines) and find_backend(lines[line_index]) is None and not lines[line_index].startswith("else") ): 
line = lines[line_index] single_line_import_search = _re_import.search(line) if single_line_import_search is not None: objects.extend(single_line_import_search.groups()[0].split(", ")) elif line.startswith(" " * 8): objects.append(line[8:-2]) line_index += 1 type_hint_objects = {"none": objects} while line_index < len(lines): backend = find_backend(lines[line_index]) if _re_try.search(lines[line_index - 1]) is None: backend = None if backend is not None: line_index += 1 while _re_else.search(lines[line_index]) is None: line_index += 1 line_index += 1 objects = [] while len(lines[line_index]) <= 1 or lines[line_index].startswith(" " * 8): line = lines[line_index] single_line_import_search = _re_import.search(line) if single_line_import_search is not None: objects.extend(single_line_import_search.groups()[0].split(", ")) elif line.startswith(" " * 12): objects.append(line[12:-2]) line_index += 1 type_hint_objects[backend] = objects else: line_index += 1 return import_dict_objects, type_hint_objects def analyze_results(import_dict_objects: Dict[str, List[str]], type_hint_objects: Dict[str, List[str]]) -> List[str]: def find_duplicates(seq): return [k for k, v in collections.Counter(seq).items() if v > 1] if list(import_dict_objects.keys()) != list(type_hint_objects.keys()): return ["Both sides of the init do not have the same backends!"] errors = [] for key in import_dict_objects.keys(): duplicate_imports = find_duplicates(import_dict_objects[key]) if duplicate_imports: errors.append(f"Duplicate _import_structure definitions for: {duplicate_imports}") duplicate_type_hints = find_duplicates(type_hint_objects[key]) if duplicate_type_hints: errors.append(f"Duplicate TYPE_CHECKING objects for: {duplicate_type_hints}") if sorted(set(import_dict_objects[key])) != sorted(set(type_hint_objects[key])): name = "base imports" if key == "none" else f"{key} backend" errors.append(f"Differences for {name}:") for a in type_hint_objects[key]: if a not in import_dict_objects[key]: errors.append(f" {a} in TYPE_HINT but not in _import_structure.") for a in import_dict_objects[key]: if a not in type_hint_objects[key]: errors.append(f" {a} in _import_structure but not in TYPE_HINT.") return errors def check_all_inits(): failures = [] for root, _, files in os.walk(PATH_TO_TRANSFORMERS): if "__init__.py" in files: fname = os.path.join(root, "__init__.py") objects = parse_init(fname) if objects is not None: errors = analyze_results(*objects) if len(errors) > 0: errors[0] = f"Problem in {fname}, both halves do not define the same objects.\n{errors[0]}" failures.append("\n".join(errors)) if len(failures) > 0: raise ValueError("\n\n".join(failures)) def get_transformers_submodules() -> List[str]: submodules = [] for path, directories, files in os.walk(PATH_TO_TRANSFORMERS): for folder in directories: if folder.startswith("_"): directories.remove(folder) continue if len(list((Path(path) / folder).glob("*.py"))) == 0: continue short_path = str((Path(path) / folder).relative_to(PATH_TO_TRANSFORMERS)) submodule = short_path.replace(os.path.sep, ".") submodules.append(submodule) for fname in files: if fname == "__init__.py": continue short_path = str((Path(path) / fname).relative_to(PATH_TO_TRANSFORMERS)) submodule = short_path.replace(".py", "").replace(os.path.sep, ".") if len(submodule.split(".")) == 1: submodules.append(submodule) return submodules IGNORE_SUBMODULES = [ "convert_pytorch_checkpoint_to_tf2", "modeling_flax_pytorch_utils", "models.esm.openfold_utils", "modeling_attn_mask_utils", "safetensors_conversion", ] 
def check_submodules(): from transformers.utils import direct_transformers_import transformers = direct_transformers_import(PATH_TO_TRANSFORMERS) import_structure_keys = set(transformers._import_structure.keys()) with open(os.path.join(PATH_TO_TRANSFORMERS, "__init__.py"), "r") as f: init_content = f.read() import_structure_keys.update(set(re.findall(r"import_structure\[\"([^\"]*)\"\]", init_content))) module_not_registered = [ module for module in get_transformers_submodules() if module not in IGNORE_SUBMODULES and module not in import_structure_keys ] if len(module_not_registered) > 0: list_of_modules = "\n".join(f"- {module}" for module in module_not_registered) raise ValueError( "The following submodules are not properly registed in the main init of Transformers:\n" f"{list_of_modules}\n" "Make sure they appear somewhere in the keys of `_import_structure` with an empty list as value." ) if __name__ == "__main__": check_all_inits() check_submodules()
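To show the kind of mismatch check_all_inits reports, here is a small self-contained sketch comparing the two halves of a hypothetical init (FooConfig/FooModel are invented names; the real analyze_results also checks the reverse direction and duplicate imports):

# Hypothetical example of the two halves of a delayed-import init going out of sync.
import_structure_objects = {"none": ["FooConfig"], "torch": ["FooModel", "FooForMaskedLM"]}
type_checking_objects = {"none": ["FooConfig"], "torch": ["FooModel"]}

for backend, objects in import_structure_objects.items():
    missing = set(objects) - set(type_checking_objects[backend])
    for obj in sorted(missing):
        # This is the kind of difference analyze_results would flag for this backend.
        print(f"{obj} in _import_structure but not in TYPE_HINT ({backend} backend).")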
Copyright 2023 The HuggingFace Inc. team. Licensed under the Apache License, Version 2.0; distributed on an "AS IS" basis, without warranties or conditions of any kind, either express or implied.

Checks that the configurations produced by the model testers stay tiny. TODO: deal with TF/Flax too. A few tester classes don't have a `parent` parameter in `__init__` (TODO: handle this better).
import glob
import os

from get_test_info import get_tester_classes


if __name__ == "__main__":
    failures = []

    pattern = os.path.join("tests", "models", "**", "test_modeling_*.py")
    test_files = glob.glob(pattern)
    # TODO: deal with TF/Flax too
    test_files = [
        x
        for x in test_files
        if not (
            os.path.basename(x).startswith("test_modeling_tf_")
            or os.path.basename(x).startswith("test_modeling_flax_")
        )
    ]

    for test_file in test_files:
        tester_classes = get_tester_classes(test_file)
        for tester_class in tester_classes:
            # A few tester classes don't have a `parent` parameter in `__init__`.
            # TODO: handle this better.
            try:
                tester = tester_class(parent=None)
            except Exception:
                continue
            if hasattr(tester, "get_config"):
                config = tester.get_config()
                for k, v in config.to_dict().items():
                    if isinstance(v, int):
                        target = None
                        if k in ["vocab_size"]:
                            target = 100
                        elif k in ["max_position_embeddings"]:
                            target = 128
                        elif k in ["hidden_size", "d_model"]:
                            target = 40
                        elif k in ["num_layers", "num_hidden_layers", "num_encoder_layers", "num_decoder_layers"]:
                            target = 5
                        if target is not None and v > target:
                            failures.append(
                                f"{tester_class.__name__} will produce a `config` of type `{config.__class__.__name__}`"
                                f' with config["{k}"] = {v} which is too large for testing! Set its value to be smaller'
                                f" than {target}."
                            )

    if len(failures) > 0:
        raise Exception(f"There were {len(failures)} failures:\n" + "\n".join(failures))
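A schematic, standalone version of the size check above, using a made-up config dictionary instead of a real model tester (the caps mirror the ones in the script):

# Caps mirroring the thresholds used above.
CAPS = {"vocab_size": 100, "max_position_embeddings": 128, "hidden_size": 40, "d_model": 40, "num_hidden_layers": 5}

# Hypothetical config produced by a model tester.
fake_tester_config = {"vocab_size": 99, "hidden_size": 32, "num_hidden_layers": 2, "max_position_embeddings": 512}

for key, value in fake_tester_config.items():
    cap = CAPS.get(key)
    if cap is not None and value > cap:
        print(f"{key}={value} is too large for a tiny test config (should be <= {cap}).")
# Only max_position_embeddings=512 triggers a warning here.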
Save the result so we can report them on Slack. Required parameters.
import argparse
import json
import subprocess


def get_runner_status(target_runners, token):
    offline_runners = []

    cmd = (
        f'curl -H "Accept: application/vnd.github+json" -H "Authorization: Bearer {token}"'
        " https://api.github.com/repos/huggingface/transformers/actions/runners"
    )
    output = subprocess.run(cmd, shell=True, stdout=subprocess.PIPE)
    o = output.stdout.decode("utf-8")
    status = json.loads(o)

    runners = status["runners"]
    for runner in runners:
        if runner["name"] in target_runners:
            if runner["status"] == "offline":
                offline_runners.append(runner)

    # save the result so we can report them on Slack
    with open("offline_runners.txt", "w") as fp:
        fp.write(json.dumps(offline_runners))

    if len(offline_runners) > 0:
        failed = "\n".join([x["name"] for x in offline_runners])
        raise ValueError(f"The following runners are offline:\n{failed}")


if __name__ == "__main__":

    def list_str(values):
        return values.split(",")

    parser = argparse.ArgumentParser()
    # Required parameters
    parser.add_argument(
        "--target_runners",
        default=None,
        type=list_str,
        required=True,
        help="Comma-separated list of runners to check status.",
    )
    parser.add_argument(
        "--token", default=None, type=str, required=True, help="A token that has actions:read permission."
    )
    args = parser.parse_args()

    get_runner_status(args.target_runners, args.token)
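The same status query could be expressed without shelling out to curl; a hypothetical variant using the requests library is sketched below (this is not what the script above does, which calls curl through subprocess):

import requests


def get_offline_runners(target_runners, token):
    # Query the same GitHub Actions endpoint as the script above and filter offline runners.
    headers = {"Accept": "application/vnd.github+json", "Authorization": f"Bearer {token}"}
    resp = requests.get(
        "https://api.github.com/repos/huggingface/transformers/actions/runners", headers=headers
    )
    resp.raise_for_status()
    runners = resp.json().get("runners", [])
    return [r for r in runners if r["name"] in target_runners and r["status"] == "offline"]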
Copyright 2020 The HuggingFace Inc. team. Licensed under the Apache License, Version 2.0; distributed on an "AS IS" basis, without warranties or conditions of any kind, either express or implied.

Utility that checks the big table in the file docs/source/en/index.md and potentially updates it.

Use from the root of the repo with `python utils/check_table.py` for a check that will error in case of inconsistencies (used by `make repo-consistency`). To auto-fix issues, run `python utils/check_table.py --fix_and_overwrite` (which is used by `make fix-copies`). All paths are set with the intent that you run this script from the root of the repo.

_find_text_in_file(filename, start_prompt, end_prompt): finds the text in `filename` between two prompts. Args: filename (str): the file to search into; start_prompt (str): a string to look for at the start of the content searched; end_prompt (str): a string that marks the end of the content to look for. Returns: str: the content between the prompts. The implementation first finds the start prompt, then goes until the end prompt.

The module defines regexes that match TF, Flax and PT model names (add there any suffixes used to identify models, separated by `|`); the PT regex will match any TF or Flax model too, so it needs to be tried in an else branch after the two previous regexes. The `transformers` module is imported directly to make sure the one in the repo is used.

camel_case_split(identifier): splits a camel-cased name into words. Args: identifier (str): the camel-cased name to parse. Returns: List[str]: the list of words in the identifier, as separated by capital letters. Example: camel_case_split("CamelCasedClass") returns ["Camel", "Cased", "Class"]. (Regex thanks to https://stackoverflow.com/questions/29916065/how-to-do-camelcase-split-in-python.)

_center_text(text, width): utility that adds spaces on the left and right of a text to make it centered for a given width. Args: text (str): the text to center; width (int): the desired length of the result. Returns: str: a text of length `width` with the original text in the middle.

get_model_table_from_auto_modules(): generates an up-to-date model table from the content of the auto modules. It builds a dictionary of model names to config names, flags whether each model prefix has a backend in PT/TF/Flax by looking through all Transformers objects once (trying again after removing the last word in the name), and builds a model-name-to-doc-link mapping, updated with special model names. MaskFormerSwin and TimmBackbone are backbones and so are not meant to be loaded and used on their own; instead they define architectures which can be loaded using the AutoBackbone API, so they are excluded. Column widths are then computed so everything displays properly in the center (the +2 leaves one extra space on each side), and the table is built per se, using _center_text for center-aligned table cell texts.

check_model_table(overwrite=False): checks that the model table in index.md is consistent with the state of the lib and potentially fixes it. Args: overwrite (bool, optional, defaults to False): whether or not to overwrite the table when it's not up to date.
import argparse import collections import os import re from typing import List from transformers.utils import direct_transformers_import TRANSFORMERS_PATH = "src/transformers" PATH_TO_DOCS = "docs/source/en" REPO_PATH = "." def _find_text_in_file(filename: str, start_prompt: str, end_prompt: str) -> str: with open(filename, "r", encoding="utf-8", newline="\n") as f: lines = f.readlines() start_index = 0 while not lines[start_index].startswith(start_prompt): start_index += 1 start_index += 1 end_index = start_index while not lines[end_index].startswith(end_prompt): end_index += 1 end_index -= 1 while len(lines[start_index]) <= 1: start_index += 1 while len(lines[end_index]) <= 1: end_index -= 1 end_index += 1 return "".join(lines[start_index:end_index]), start_index, end_index, lines _re_tf_models = re.compile(r"TF(.*)(?:Model|Encoder|Decoder|ForConditionalGeneration)") _re_flax_models = re.compile(r"Flax(.*)(?:Model|Encoder|Decoder|ForConditionalGeneration)") _re_pt_models = re.compile(r"(.*)(?:Model|Encoder|Decoder|ForConditionalGeneration)") transformers_module = direct_transformers_import(TRANSFORMERS_PATH) def camel_case_split(identifier: str) -> List[str]: matches = re.finditer(".+?(?:(?<=[a-z])(?=[A-Z])|(?<=[A-Z])(?=[A-Z][a-z])|$)", identifier) return [m.group(0) for m in matches] def _center_text(text: str, width: int) -> str: text_length = 2 if text == "✅" or text == "❌" else len(text) left_indent = (width - text_length) // 2 right_indent = width - text_length - left_indent return " " * left_indent + text + " " * right_indent SPECIAL_MODEL_NAME_LINK_MAPPING = { "Data2VecAudio": "[Data2VecAudio](model_doc/data2vec)", "Data2VecText": "[Data2VecText](model_doc/data2vec)", "Data2VecVision": "[Data2VecVision](model_doc/data2vec)", "DonutSwin": "[DonutSwin](model_doc/donut)", } MODEL_NAMES_WITH_SAME_CONFIG = { "BARThez": "BART", "BARTpho": "BART", "BertJapanese": "BERT", "BERTweet": "BERT", "BORT": "BERT", "ByT5": "T5", "CPM": "OpenAI GPT-2", "DePlot": "Pix2Struct", "DialoGPT": "OpenAI GPT-2", "DiT": "BEiT", "FLAN-T5": "T5", "FLAN-UL2": "T5", "HerBERT": "BERT", "LayoutXLM": "LayoutLMv2", "Llama2": "LLaMA", "MADLAD-400": "T5", "MatCha": "Pix2Struct", "mBART-50": "mBART", "Megatron-GPT2": "OpenAI GPT-2", "mLUKE": "LUKE", "MMS": "Wav2Vec2", "NLLB": "M2M100", "PhoBERT": "BERT", "T5v1.1": "T5", "TAPEX": "BART", "UL2": "T5", "Wav2Vec2Phoneme": "Wav2Vec2", "XLM-V": "XLM-RoBERTa", "XLS-R": "Wav2Vec2", "XLSR-Wav2Vec2": "Wav2Vec2", } def get_model_table_from_auto_modules() -> str: config_maping_names = transformers_module.models.auto.configuration_auto.CONFIG_MAPPING_NAMES model_name_to_config = { name: config_maping_names[code] for code, name in transformers_module.MODEL_NAMES_MAPPING.items() if code in config_maping_names } model_name_to_prefix = {name: config.replace("Config", "") for name, config in model_name_to_config.items()} pt_models = collections.defaultdict(bool) tf_models = collections.defaultdict(bool) flax_models = collections.defaultdict(bool) for attr_name in dir(transformers_module): lookup_dict = None if _re_tf_models.match(attr_name) is not None: lookup_dict = tf_models attr_name = _re_tf_models.match(attr_name).groups()[0] elif _re_flax_models.match(attr_name) is not None: lookup_dict = flax_models attr_name = _re_flax_models.match(attr_name).groups()[0] elif _re_pt_models.match(attr_name) is not None: lookup_dict = pt_models attr_name = _re_pt_models.match(attr_name).groups()[0] if lookup_dict is not None: while len(attr_name) > 0: if attr_name in 
model_name_to_prefix.values(): lookup_dict[attr_name] = True break attr_name = "".join(camel_case_split(attr_name)[:-1]) model_names = list(model_name_to_config.keys()) + list(MODEL_NAMES_WITH_SAME_CONFIG.keys()) model_names_mapping = transformers_module.models.auto.configuration_auto.MODEL_NAMES_MAPPING model_name_to_link_mapping = {value: f"[{value}](model_doc/{key})" for key, value in model_names_mapping.items()} model_name_to_link_mapping = { k: SPECIAL_MODEL_NAME_LINK_MAPPING[k] if k in SPECIAL_MODEL_NAME_LINK_MAPPING else v for k, v in model_name_to_link_mapping.items() } names_to_exclude = ["MaskFormerSwin", "TimmBackbone", "Speech2Text2"] model_names = [name for name in model_names if name not in names_to_exclude] model_names.sort(key=str.lower) columns = ["Model", "PyTorch support", "TensorFlow support", "Flax Support"] widths = [len(c) + 2 for c in columns] widths[0] = max([len(doc_link) for doc_link in model_name_to_link_mapping.values()]) + 2 table = "|" + "|".join([_center_text(c, w) for c, w in zip(columns, widths)]) + "|\n" table += "|" + "|".join([":" + "-" * (w - 2) + ":" for w in widths]) + "|\n" check = {True: "✅", False: "❌"} for name in model_names: if name in MODEL_NAMES_WITH_SAME_CONFIG.keys(): prefix = model_name_to_prefix[MODEL_NAMES_WITH_SAME_CONFIG[name]] else: prefix = model_name_to_prefix[name] line = [ model_name_to_link_mapping[name], check[pt_models[prefix]], check[tf_models[prefix]], check[flax_models[prefix]], ] table += "|" + "|".join([_center_text(l, w) for l, w in zip(line, widths)]) + "|\n" return table def check_model_table(overwrite=False): current_table, start_index, end_index, lines = _find_text_in_file( filename=os.path.join(PATH_TO_DOCS, "index.md"), start_prompt="<!--This table is updated automatically from the auto modules", end_prompt="<!-- End table-->", ) new_table = get_model_table_from_auto_modules() if current_table != new_table: if overwrite: with open(os.path.join(PATH_TO_DOCS, "index.md"), "w", encoding="utf-8", newline="\n") as f: f.writelines(lines[:start_index] + [new_table] + lines[end_index:]) else: raise ValueError( "The model table in the `index.md` has not been updated. Run `make fix-copies` to fix this." ) if __name__ == "__main__": parser = argparse.ArgumentParser() parser.add_argument("--fix_and_overwrite", action="store_true", help="Whether to fix inconsistencies.") args = parser.parse_args() check_model_table(args.fix_and_overwrite)
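Two helpers used above are easy to sanity-check in isolation; the following self-contained demo restates the camel-case splitting and the cell centering for illustration (simplified: the real _center_text special-cases the display width of the ✅/❌ symbols):

import re


def camel_case_split_demo(identifier):
    # Split "CamelCasedClass" into ["Camel", "Cased", "Class"].
    matches = re.finditer(".+?(?:(?<=[a-z])(?=[A-Z])|(?<=[A-Z])(?=[A-Z][a-z])|$)", identifier)
    return [m.group(0) for m in matches]


def center_text_demo(text, width):
    left = (width - len(text)) // 2
    return " " * left + text + " " * (width - len(text) - left)


print(camel_case_split_demo("CamelCasedClass"))  # ['Camel', 'Cased', 'Class']
print(f"|{center_text_demo('Model', 11)}|")      # |   Model   |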
Copyright 2023 The HuggingFace Inc. team. Licensed under the Apache License, Version 2.0; distributed on an "AS IS" basis, without warranties or conditions of any kind, either express or implied.

Utility that checks that the list of models in the tips of the task-specific pages of the doc is up to date, and potentially fixes it.

Use from the root of the repo with `python utils/check_task_guides.py` for a check that will error in case of inconsistencies (used by `make repo-consistency`). To auto-fix issues, run `python utils/check_task_guides.py --fix_and_overwrite` (which is used by `make fix-copies`). All paths are set with the intent that you run this script from the root of the repo.

_find_text_in_file(filename, start_prompt, end_prompt): finds the text in `filename` between two prompts. Args: filename (str): the file to search into; start_prompt (str): a string to look for at the start of the content searched; end_prompt (str): a string that marks the end of the content to look for. Returns: str: the content between the prompts. The implementation first finds the start prompt, then goes until the end prompt. The `transformers` module is imported directly to make sure the one in the repo is used.

TASK_GUIDE_TO_MODELS maps each task guide to the corresponding auto class. SPECIAL_TASK_GUIDE_TO_MODEL_TYPES contains model types used in some task guides that are not in CONFIG_MAPPING_NAMES (and therefore not in any MODEL_MAPPING_NAMES or any MODEL_FOR_XXX_MAPPING_NAMES).

get_model_list_for_task(task_guide): returns the list of models supporting a given task. Args: task_guide (str): the name of the task guide to check. Returns: str: the list of models supporting this task, as links to their respective doc pages, separated by commas.

check_model_list_for_task(task_guide, overwrite=False): for a given task guide, checks that the model list in the generated tip is consistent with the state of the lib, and updates it if needed. Args: task_guide (str): the name of the task guide to check; overwrite (bool, optional, defaults to False): whether or not to overwrite the list when it's not up to date.
import argparse import os from transformers.utils import direct_transformers_import TRANSFORMERS_PATH = "src/transformers" PATH_TO_TASK_GUIDES = "docs/source/en/tasks" def _find_text_in_file(filename: str, start_prompt: str, end_prompt: str) -> str: with open(filename, "r", encoding="utf-8", newline="\n") as f: lines = f.readlines() start_index = 0 while not lines[start_index].startswith(start_prompt): start_index += 1 start_index += 1 end_index = start_index while not lines[end_index].startswith(end_prompt): end_index += 1 end_index -= 1 while len(lines[start_index]) <= 1: start_index += 1 while len(lines[end_index]) <= 1: end_index -= 1 end_index += 1 return "".join(lines[start_index:end_index]), start_index, end_index, lines transformers_module = direct_transformers_import(TRANSFORMERS_PATH) TASK_GUIDE_TO_MODELS = { "asr.md": transformers_module.models.auto.modeling_auto.MODEL_FOR_CTC_MAPPING_NAMES, "audio_classification.md": transformers_module.models.auto.modeling_auto.MODEL_FOR_AUDIO_CLASSIFICATION_MAPPING_NAMES, "language_modeling.md": transformers_module.models.auto.modeling_auto.MODEL_FOR_CAUSAL_LM_MAPPING_NAMES, "image_classification.md": transformers_module.models.auto.modeling_auto.MODEL_FOR_IMAGE_CLASSIFICATION_MAPPING_NAMES, "masked_language_modeling.md": transformers_module.models.auto.modeling_auto.MODEL_FOR_MASKED_LM_MAPPING_NAMES, "multiple_choice.md": transformers_module.models.auto.modeling_auto.MODEL_FOR_MULTIPLE_CHOICE_MAPPING_NAMES, "object_detection.md": transformers_module.models.auto.modeling_auto.MODEL_FOR_OBJECT_DETECTION_MAPPING_NAMES, "question_answering.md": transformers_module.models.auto.modeling_auto.MODEL_FOR_QUESTION_ANSWERING_MAPPING_NAMES, "semantic_segmentation.md": transformers_module.models.auto.modeling_auto.MODEL_FOR_SEMANTIC_SEGMENTATION_MAPPING_NAMES, "sequence_classification.md": transformers_module.models.auto.modeling_auto.MODEL_FOR_SEQUENCE_CLASSIFICATION_MAPPING_NAMES, "summarization.md": transformers_module.models.auto.modeling_auto.MODEL_FOR_SEQ_TO_SEQ_CAUSAL_LM_MAPPING_NAMES, "token_classification.md": transformers_module.models.auto.modeling_auto.MODEL_FOR_TOKEN_CLASSIFICATION_MAPPING_NAMES, "translation.md": transformers_module.models.auto.modeling_auto.MODEL_FOR_SEQ_TO_SEQ_CAUSAL_LM_MAPPING_NAMES, "video_classification.md": transformers_module.models.auto.modeling_auto.MODEL_FOR_VIDEO_CLASSIFICATION_MAPPING_NAMES, "document_question_answering.md": transformers_module.models.auto.modeling_auto.MODEL_FOR_DOCUMENT_QUESTION_ANSWERING_MAPPING_NAMES, "monocular_depth_estimation.md": transformers_module.models.auto.modeling_auto.MODEL_FOR_DEPTH_ESTIMATION_MAPPING_NAMES, } SPECIAL_TASK_GUIDE_TO_MODEL_TYPES = { "summarization.md": ("nllb",), "translation.md": ("nllb",), } def get_model_list_for_task(task_guide: str) -> str: model_maping_names = TASK_GUIDE_TO_MODELS[task_guide] special_model_types = SPECIAL_TASK_GUIDE_TO_MODEL_TYPES.get(task_guide, set()) model_names = { code: name for code, name in transformers_module.MODEL_NAMES_MAPPING.items() if (code in model_maping_names or code in special_model_types) } return ", ".join([f"[{name}](../model_doc/{code})" for code, name in model_names.items()]) + "\n" def check_model_list_for_task(task_guide: str, overwrite: bool = False): current_list, start_index, end_index, lines = _find_text_in_file( filename=os.path.join(PATH_TO_TASK_GUIDES, task_guide), start_prompt="<!--This tip is automatically generated by `make fix-copies`, do not fill manually!-->", end_prompt="<!--End of the generated tip-->", 
) new_list = get_model_list_for_task(task_guide) if current_list != new_list: if overwrite: with open(os.path.join(PATH_TO_TASK_GUIDES, task_guide), "w", encoding="utf-8", newline="\n") as f: f.writelines(lines[:start_index] + [new_list] + lines[end_index:]) else: raise ValueError( f"The list of models that can be used in the {task_guide} guide needs an update. Run `make fix-copies`" " to fix this." ) if __name__ == "__main__": parser = argparse.ArgumentParser() parser.add_argument("--fix_and_overwrite", action="store_true", help="Whether to fix inconsistencies.") args = parser.parse_args() for task_guide in TASK_GUIDE_TO_MODELS.keys(): check_model_list_for_task(task_guide, args.fix_and_overwrite)
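To make the generated tip concrete, here is a toy version of the link-list construction with invented model codes (the real get_model_list_for_task reads the auto mapping names from the library):

# Hypothetical code -> display-name mapping, standing in for MODEL_NAMES_MAPPING.
model_names = {"bert": "BERT", "roberta": "RoBERTa"}
tip_line = ", ".join(f"[{name}](../model_doc/{code})" for code, name in model_names.items()) + "\n"
print(tip_line)  # [BERT](../model_doc/bert), [RoBERTa](../model_doc/roberta)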
Copyright 2020 The HuggingFace Inc. team. Licensed under the Apache License, Version 2.0; distributed on an "AS IS" basis, without warranties or conditions of any kind, either express or implied.

All paths are set with the intent that you run this script from the root of the repo with the command `python utils/check_tf_ops.py`.

INTERNAL_OPS lists internal TensorFlow ops that can be safely ignored (mostly specific to a saved model). The check iterates over every metagraph, in case there is more than one (a saved model can contain multiple graphs), adds the operations in the graph definition, goes through the functions in the graph definition and adds the operations in each function, then converts the set to a list (sorted, if you want).
import argparse
import json
import os

from tensorflow.core.protobuf.saved_model_pb2 import SavedModel


# All paths are set with the intent you should run this script from the root of the repo.
REPO_PATH = "."

# Internal TensorFlow ops that can be safely ignored (mostly specific to a saved model).
INTERNAL_OPS = [
    "Assert",
    "AssignVariableOp",
    "EmptyTensorList",
    "MergeV2Checkpoints",
    "ReadVariableOp",
    "ResourceGather",
    "RestoreV2",
    "SaveV2",
    "ShardedFilename",
    "StatefulPartitionedCall",
    "StaticRegexFullMatch",
    "VarHandleOp",
]


def onnx_compliancy(saved_model_path, strict, opset):
    saved_model = SavedModel()
    onnx_ops = []

    with open(os.path.join(REPO_PATH, "utils", "tf_ops", "onnx.json")) as f:
        onnx_opsets = json.load(f)["opsets"]

    for i in range(1, opset + 1):
        onnx_ops.extend(onnx_opsets[str(i)])

    with open(saved_model_path, "rb") as f:
        saved_model.ParseFromString(f.read())

    model_op_names = set()

    # Iterate over every metagraph in case there is more than one (a saved model can contain multiple graphs).
    for meta_graph in saved_model.meta_graphs:
        # Add operations in the graph definition.
        model_op_names.update(node.op for node in meta_graph.graph_def.node)

        # Go through the functions in the graph definition.
        for func in meta_graph.graph_def.library.function:
            # Add operations in each function.
            model_op_names.update(node.op for node in func.node_def)

    # Convert to list, sorted if you want.
    model_op_names = sorted(model_op_names)
    incompatible_ops = []

    for op in model_op_names:
        if op not in onnx_ops and op not in INTERNAL_OPS:
            incompatible_ops.append(op)

    if strict and len(incompatible_ops) > 0:
        raise Exception(
            f"Found the following incompatible ops for the opset {opset}:\n" + "\n".join(incompatible_ops)
        )
    elif len(incompatible_ops) > 0:
        print(f"Found the following incompatible ops for the opset {opset}:")
        print(*incompatible_ops, sep="\n")
    else:
        print(f"The saved model {saved_model_path} can properly be converted with ONNX.")


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--saved_model_path", help="Path of the saved model to check (the .pb file).")
    parser.add_argument(
        "--opset", default=12, type=int, help="The ONNX opset against which the model has to be tested."
    )
    parser.add_argument(
        "--framework", choices=["onnx"], default="onnx", help="Frameworks against which to test the saved model."
    )
    parser.add_argument(
        "--strict", action="store_true", help="Whether make the checking strict (raise errors) or not (raise warnings)"
    )
    args = parser.parse_args()

    if args.framework == "onnx":
        onnx_compliancy(args.saved_model_path, args.strict, args.opset)
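The utils/tf_ops/onnx.json file read above is assumed to map opset numbers (as strings) to lists of op names; here is a minimal sketch of building the allowed-op set from such a structure, with hypothetical content inlined instead of reading the real file:

import json

# Hypothetical excerpt of the opsets mapping consumed above.
onnx_json = '{"opsets": {"1": ["Add", "MatMul"], "2": ["Reshape"]}}'
onnx_opsets = json.loads(onnx_json)["opsets"]

opset = 2
onnx_ops = []
for i in range(1, opset + 1):
    onnx_ops.extend(onnx_opsets[str(i)])
print(onnx_ops)  # ['Add', 'MatMul', 'Reshape']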
coding utf 8 2022 the huggingface inc team licensed under the apache license version 2 0 the license you may not use this file except in compliance with the license you may obtain a copy of the license at http www apache org licenses license 2 0 unless required by applicable law or agreed to in writing software distributed under the license is distributed on an as is basis without warranties or conditions of any kind either express or implied see the license for the specific language governing permissions and limitations under the license make sure tokenizer plays nice with multiprocessing this list contains the model architectures for which a tiny version could not be created avoid to add new architectures here unless we have verified carefully that it s almost impossible to create them one such case is no model tester class is implemented for a model type like mt5 because its
architecture is identical to another one mt5 is based on t5 but trained on different datasets or with different techniques return a tuple of processors for config_class we use tuple here to include potentially both slow fast tokenizers to make a uniform return type check first if a model has processormixin otherwise check if it has tokenizers and or an image processor or a feature extractor remark some configurations have no processor at all for example generic composite models like encoderdecodermodel is used for any compatible text models also decisiontransformer doesn t require any processor we might get none for some tokenizers remove them here return a tuple of all possible architectures attributed to a configuration class config_class for example bertconfig bertmodel bertformaskedlm bertforquestionanswering a model architecture could appear in several mappings for example bartforconditionalgeneration is in model_for_pretraining_mapping_names model_with_lm_head_mapping_names model_for_masked_lm_mapping_names model_for_seq_to_seq_causal_lm_mapping_names we avoid the duplication get the config class from a processor class some config model classes use tokenizers feature_extractors from other models for example gpt j uses gpt2tokenizer if no checkpoint is found for a config class or a checkpoint is found without necessary file s to create the processor for processor_class we get the config class that corresponds to processor_class and use it to find a checkpoint in order to create the processor wav2vec2ctctokenizer wav2vec2config find the new configuration class create a processor for processor_class if a processor is not able to be built with the original arguments this method tries to change the arguments and call itself recursively by inferring a new config_class or a new processor_class from another one in order to find a checkpoint containing the necessary files to build a processor the processor is not saved here instead it will be saved in convert_processors after further changes in convert_processors for each model architecture a copy will be created and saved along the built model currently this solely uses the docstring in the source file of config_class to find a checkpoint try to get the checkpoint from the config class for processor_class this helps cases like xclipconfig and videomaefeatureextractor to find a checkpoint from videomaeconfig try to get a new processor class from checkpoint this is helpful for a checkpoint without necessary file to load processor while processor_class is an auto class for example sew has wav2vec2processor in processor_mapping_names its tokenizer_class is autotokenizer and the checkpoint https huggingface co asapp sew tiny 100k has no tokenizer file but we can get tokenizer_class wav2vec2ctctokenizer from the config file the new processor class won t be able to load from checkpoint but it helps this recursive method to find a way to build a processor if tokenizer_class is not specified in config let s use config to get the process class via auto mappings but only allow the tokenizer mapping being used this is to make wav2vec2conformer build used to avoid infinite recursion between a pair of fast slow tokenizer types let s use fast tokenizer if there is any try to build each component tokenizer feature extractor of a processormixin this could be a tuple for tokenizers for example clipprocessor has feature_extractor_class clipfeatureextractor tokenizer_class cliptokenizer cliptokenizerfast try to build a processormixin so we can return a single 
value checkpoint might lack some file s to load a processor for example facebook hubert base ls960 has no tokenizer file to load wav2vec2ctctokenizer in this case we try to build a processor with the configuration class for example wav2vec2config corresponding to processor_class try to create an image processor or a feature extractor without any checkpoint validation retrieve a tiny configuration from config_class using each model s modeltester args config_class subclass of pretrainedconfig returns an instance of config_class with tiny hyperparameters for model type like data2vec vision and donut swin we can t get the config model file name directly via model_type as it would be sth like configuration_data2vec_vision py a simple way is to use inspect getsourcefile config_class the modeling file name without prefix modeling_ and postfix py find the model tester class sort with the length of the class names first then the alphabetical order this is to avoid t5encoderonlymodeltest is used instead of t5modeltest which has is_encoder_decoder false and causes some pipeline tests failing also failures in optimum ci todo more fine grained control of the desired tester class clip like models have text_model_tester and vision_model_tester and we need to pass vocab_size to text_model_tester via text_kwargs the same trick is also necessary for flava parent is an instance of unittest testcase but we don t need it here poolformer has no get_config defined furthermore it s better to use prepare_config_and_inputs even if get_config is defined since there might be some extra changes in prepare_config_and_inputs make sure this is long enough some model tester has 20 for this attr to pass text generation pipeline tests make sure it at least runs speech2textmodel specific change a processor to work with smaller inputs for tokenizers we try to reduce their vocabulary size for feature extractor we use smaller image size or change other attributes using the values from tiny_config see convert_feature_extractor this method should not fail we catch the errors and put them in result warnings with descriptive messages set tokenizer s to none if the fast slow tokenizers have different values for vocab_size or length if keep_fast_tokenizer true the fast tokenizer will be kept sanity check 1 fast and slow tokenizers should be compatible vocab_size sanity check 2 fast and slow tokenizers should be compatible length currently we only have these 2 possibilities check the built processors have the unique type if the original fast slow tokenizers don t correspond keep only the fast tokenizer this doesn t necessarily imply the fast slow tokenizers in a single hub repo has issues it s more of an issue in build_processor which tries to get a checkpoint with as much effort as possible for yosomodel which uses alberttokenizer fast its real hub checkpoint doesn t contain valid files to load the slower tokenizer alberttokenizer and it ends up finding the canonical checkpoint of albertmodel which has different vocabulary todo try to improve build_processor s definition and or usage to avoid the above situation in the first place wav2vec2forctc byt5tokenizer etc all are already small enough and have no fast version that can be retrained if fast_tokenizer exists slow_tokenizer should correspond to it make sure the fast tokenizer can be saved we don t save it to output_folder at this moment only at the end of this function let s just keep the fast version if the possibly converted fast slow tokenizers don t correspond set them to none 
and use the original tokenizers if there is any conversion failed we keep the original tokenizers let s use the original version at the end original_fast_tokenizer and original_slow_tokenizer make sure the fast tokenizer can be saved we don t save it to output_folder at this moment only at the end of this function make sure the slow tokenizer can be saved we don t save it to output_folder at this moment only at the end of this function update feature extractors using the tiny config get framework agnostic architecture name used to save all pt tf flax models into the same directory create and save a model for model_arch also copy the set of processors to each model under the same model type output folder copy the same set of processors for a model type to the model arch specific folder fill result with errors for all target model arch if we can t build processor upload the tiny models open a pr on the existing hub repo todo we need this information push to hub repo directly this prints a progress bar with the upload these will be removed at the end if they are empty not encoder decoder but encoder encoder we just keep the same name as above to make code easier build encoder build decoder build encoder decoder specify these explicitly for encoder decoder like models but not for vision text dual encoder as it has no decoder copy the processors fill result use tokenizer to get the values of bos_token_id eos_token_ids etc the argument token_id_name should be a string ending with _token_id and original_token_id should be an integer that will be return if tokenizer has no token corresponding to token_id_name bark configuration is too special let s just not handle this for now check if there is any tokenizer prefer fast version if any get some properties of the already converted tokenizer smaller vocab size special token ids etc we use len tokenizer instead of tokenizer vocab_size to avoid potential issues for tokenizers with non empty added_tokens_encoder one example is the debertav2tokenizer where the mask token is the extra token the original checkpoint has length 35998 but it doesn t have ids 30400 and 30514 but instead 35998 and 35999 used to create a new model tester with tokenizer vocab_size in order to get the updated special token ids fsmtmodeltester accepts src_vocab_size and tgt_vocab_size but not vocab_size handle the possibility of text_config inside _tiny_config for clip like models owlvit groupvit etc collect values of some special token ids using the token id values from tokenizer instead of from _tiny_config fsmtconfig has decoderconfig as decoder attribute create all models for a certain model type args config_class pretrainedconfig a subclass of pretrainedconfig that is used to determine models_to_create models_to_create dict a dictionary containing the processor model classes that we want to create the instances these models are of the same model type which is associated to config_class output_dir str the directory to save all the checkpoints each model architecture will be saved in a subdirectory under it models in different frameworks with the same architecture will be saved in the same subdirectory these will be removed at the end if they are empty build processors convert the processors reduce vocabulary size smaller image size etc just for us to see this easily in the report update attributes that vocab_size involves so far we only have to deal with text_config as config_overrides contains text related attributes only fuyuconfig saves data under both fuyuconfig and its 
text_config this is not good but let s just update every involved fields to avoid potential failure if text_config_dict exists we need to update its value here too in order to make save_pretrained from_pretrained work update result processor make pt tf weights compatible remove tf use the same weights from pytorch conversion may fail let s not create a model with different weights to avoid confusion for now build a summary a dictionary of the form model architecture name tokenizer_classes processor_classes model_classes tiny model is not created for arch_name composite models checkpoints have more precise repo names on the hub the directory is not created but processor s is are included in results we might get duplication here we will remove them below when creating updated_data deduplication and sort a map from config classes to tuples of processors tokenizer feature extractor processor classes this is the directory containing the reports build the tiny model summary file the tokenizer_classes and processor_classes could be both empty lists when using the items in this file to update the file tests utils tiny_model_summary json the model architectures with tokenizer_classes and processor_classes being both empty should not be added to tests utils tiny_model_summary json build the warning failure report json format same format as the complete results except this contains only warnings or errors the simplified report a txt file with each line of format model architecture name ok or error message the simplified failure report same above except this only contains line with errors this has to be spawn to avoid hanging forever
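
The description above says processors are discovered by first checking PROCESSOR_MAPPING and then falling back to the tokenizer / image-processor / feature-extractor auto mappings. The short sketch below only illustrates how those mappings can be queried directly; it assumes a transformers version where the mappings are importable from the top-level package (as they are in the imports further down), and the exact classes returned vary by version.

# Illustrative sketch: look up which processor classes are registered for a config class.
from transformers import BertConfig, ViTConfig
from transformers import TOKENIZER_MAPPING, IMAGE_PROCESSOR_MAPPING

# Tokenizer mapping values are (slow_tokenizer, fast_tokenizer) tuples.
print(TOKENIZER_MAPPING[BertConfig])       # e.g. (BertTokenizer, BertTokenizerFast)

# Vision models register an image processor instead of a tokenizer.
print(IMAGE_PROCESSOR_MAPPING[ViTConfig])  # e.g. ViTImageProcessor (version dependent)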
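
The tiny configuration itself is obtained from each model's ModelTester in the test suite, as described above. The sketch below does not go through a tester; it is a hand-rolled illustration of the end product, with placeholder hyperparameter values (the testers pick their own per model) and a hypothetical output path.

# Hedged sketch: build a tiny random BERT by hand instead of via a ModelTester.
from transformers import BertConfig, BertModel

tiny_config = BertConfig(
    vocab_size=1024,          # mirrors the TARGET_VOCAB_SIZE constant used by this script
    hidden_size=32,
    num_hidden_layers=2,
    num_attention_heads=2,
    intermediate_size=64,
    max_position_embeddings=512,
)
model = BertModel(tiny_config)                   # randomly initialized, only a few MB
model.save_pretrained("/tmp/tiny-random-bert")   # hypothetical output path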
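
For tokenizers, the description says the vocabulary is reduced; convert_tokenizer in the code below does this by retraining the fast tokenizer on wikitext-2 with a target vocabulary of 1024. A minimal version of the same idea is sketched here; it needs network access to fetch the dataset and tokenizer, and the checkpoint name is just an example.

# Hedged sketch of shrinking a fast tokenizer's vocabulary, as convert_tokenizer does.
from datasets import load_dataset
from transformers import AutoTokenizer

ds = load_dataset("wikitext", "wikitext-2-raw-v1", split="train")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # any fast tokenizer

small_tokenizer = tokenizer.train_new_from_iterator(ds["text"], vocab_size=1024, show_progress=False)
print(len(small_tokenizer))  # close to 1024 (special tokens included)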
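
get_config_overrides, described above, copies the converted tokenizer's vocabulary size and special token ids onto the tiny config so that, for example, eos_token_id stays consistent with the shrunk vocabulary. The sketch below is a simplified version of that idea using only public tokenizer attributes (the script itself goes through private _convert_token_to_id helpers); the GPT-2 checkpoint and the tiny hyperparameters are placeholders.

# Hedged sketch: align a tiny config's special token ids with a tokenizer.
from transformers import AutoTokenizer, GPT2Config

tokenizer = AutoTokenizer.from_pretrained("gpt2")
config = GPT2Config(vocab_size=len(tokenizer), n_layer=2, n_head=2, n_embd=32)

for name in ("bos_token_id", "eos_token_id", "pad_token_id"):
    token_id = getattr(tokenizer, name, None)
    if token_id is not None and hasattr(config, name):
        setattr(config, name, token_id)

print(config.bos_token_id, config.eos_token_id)  # both 50256 for GPT-2's tokenizer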
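
For the composite model types (encoder-decoder, vision-encoder-decoder, and so on), the description explains that an encoder and a decoder are built separately and then stitched together. Below is a hedged sketch of the same pattern with two tiny BERTs saved to temporary directories; the hyperparameters and folder names are placeholders, not the values build_composite_models uses.

# Hedged sketch: stitch two tiny BERTs into an EncoderDecoderModel.
import os
import tempfile

from transformers import BertConfig, BertLMHeadModel, BertModel, EncoderDecoderModel

tiny = dict(vocab_size=1024, hidden_size=32, num_hidden_layers=2, num_attention_heads=2, intermediate_size=64)

with tempfile.TemporaryDirectory() as tmpdir:
    enc_path, dec_path = os.path.join(tmpdir, "encoder"), os.path.join(tmpdir, "decoder")
    BertModel(BertConfig(**tiny)).save_pretrained(enc_path)

    dec_config = BertConfig(is_decoder=True, add_cross_attention=True, **tiny)
    BertLMHeadModel(dec_config).save_pretrained(dec_path)

    model = EncoderDecoderModel.from_encoder_decoder_pretrained(enc_path, dec_path)
    model.save_pretrained(os.path.join(tmpdir, "EncoderDecoderModel-bert-bert"))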
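
The upload step described above (upload_model) creates, or reuses, a Hub repository and either pushes directly or opens a PR when the repo already exists. A minimal sketch with the same huggingface_hub calls follows; the organization, repo name, local folder and token are placeholders, and write access is required to actually run it.

# Hedged sketch of the upload path: create the repo if needed, then upload a folder.
from huggingface_hub import create_repo, upload_folder

repo_id = "my-org/tiny-random-BertModel"   # placeholder organization/name
create_repo(repo_id=repo_id, exist_ok=True, repo_type="model", token="hf_...")  # placeholder token

upload_folder(
    folder_path="/tmp/tiny-random-bert",    # placeholder local folder
    repo_id=repo_id,
    repo_type="model",
    commit_message="Upload tiny models for BertModel",
    create_pr=True,                         # open a PR instead of pushing to main
    token="hf_...",
)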
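
Finally, the description covers the script's entry point, create_tiny_models, and its argparse options. The call below is a hedged sketch of driving that function programmatically; the argument names are inferred from the argparse section summarized above and should be re-checked against the file, and the function expects to be run from the root of a transformers clone with the multiprocessing start method set to spawn.

# Hedged sketch: create tiny models for a single model type without uploading.
create_tiny_models(
    output_path="/tmp/tiny_models",   # where checkpoints and reports are written
    all=False,
    model_types=["bert"],             # only create tiny models for BERT
    models_to_skip=[],
    no_check=False,
    upload=False,                     # set True (plus organization/token) to push to the Hub
    organization=None,
    token=None,
    num_workers=1,
)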
import argparse import collections.abc import copy import inspect import json import multiprocessing import os import shutil import tempfile import traceback from pathlib import Path from check_config_docstrings import get_checkpoint_from_config_class from datasets import load_dataset from get_test_info import get_model_to_tester_mapping, get_tester_classes_for_model from huggingface_hub import Repository, create_repo, hf_api, upload_folder from transformers import ( CONFIG_MAPPING, FEATURE_EXTRACTOR_MAPPING, IMAGE_PROCESSOR_MAPPING, PROCESSOR_MAPPING, TOKENIZER_MAPPING, AutoTokenizer, LayoutLMv3TokenizerFast, PreTrainedTokenizer, PreTrainedTokenizerFast, logging, ) from transformers.feature_extraction_utils import FeatureExtractionMixin from transformers.file_utils import is_tf_available, is_torch_available from transformers.image_processing_utils import BaseImageProcessor from transformers.models.auto.configuration_auto import AutoConfig, model_type_to_module_name from transformers.models.fsmt import configuration_fsmt from transformers.processing_utils import ProcessorMixin, transformers_module from transformers.tokenization_utils_base import PreTrainedTokenizerBase os.environ["TOKENIZERS_PARALLELISM"] = "false" logging.set_verbosity_error() logging.disable_progress_bar() logger = logging.get_logger(__name__) os.environ["TF_CPP_MIN_LOG_LEVEL"] = "3" if not is_torch_available(): raise ValueError("Please install PyTorch.") if not is_tf_available(): raise ValueError("Please install TensorFlow.") FRAMEWORKS = ["pytorch", "tensorflow"] INVALID_ARCH = [] TARGET_VOCAB_SIZE = 1024 data = {"training_ds": None, "testing_ds": None} COMPOSITE_MODELS = { "EncoderDecoderModel": "EncoderDecoderModel-bert-bert", "SpeechEncoderDecoderModel": "SpeechEncoderDecoderModel-wav2vec2-bert", "VisionEncoderDecoderModel": "VisionEncoderDecoderModel-vit-gpt2", "VisionTextDualEncoderModel": "VisionTextDualEncoderModel-vit-bert", } UNCONVERTIBLE_MODEL_ARCHITECTURES = { "BertGenerationEncoder", "BertGenerationDecoder", "CamembertForSequenceClassification", "CamembertForMultipleChoice", "CamembertForMaskedLM", "CamembertForCausalLM", "CamembertForTokenClassification", "CamembertForQuestionAnswering", "CamembertModel", "TFCamembertForMultipleChoice", "TFCamembertForTokenClassification", "TFCamembertForQuestionAnswering", "TFCamembertForSequenceClassification", "TFCamembertForMaskedLM", "TFCamembertModel", "TFCamembertForCausalLM", "DecisionTransformerModel", "GraphormerModel", "InformerModel", "JukeboxModel", "MarianForCausalLM", "MaskFormerSwinModel", "MaskFormerSwinBackbone", "MT5Model", "MT5ForConditionalGeneration", "UMT5ForConditionalGeneration", "TFMT5ForConditionalGeneration", "TFMT5Model", "QDQBertForSequenceClassification", "QDQBertForMaskedLM", "QDQBertModel", "QDQBertForTokenClassification", "QDQBertLMHeadModel", "QDQBertForMultipleChoice", "QDQBertForQuestionAnswering", "QDQBertForNextSentencePrediction", "ReformerModelWithLMHead", "RetriBertModel", "Speech2Text2ForCausalLM", "TimeSeriesTransformerModel", "TrajectoryTransformerModel", "TrOCRForCausalLM", "XLMProphetNetForConditionalGeneration", "XLMProphetNetForCausalLM", "XLMProphetNetModel", "XLMRobertaModel", "XLMRobertaForTokenClassification", "XLMRobertaForMultipleChoice", "XLMRobertaForMaskedLM", "XLMRobertaForCausalLM", "XLMRobertaForSequenceClassification", "XLMRobertaForQuestionAnswering", "TFXLMRobertaForSequenceClassification", "TFXLMRobertaForMaskedLM", "TFXLMRobertaForCausalLM", "TFXLMRobertaForQuestionAnswering", "TFXLMRobertaModel", 
"TFXLMRobertaForMultipleChoice", "TFXLMRobertaForTokenClassification", } def get_processor_types_from_config_class(config_class, allowed_mappings=None): def _to_tuple(x): if not isinstance(x, collections.abc.Sequence): x = (x,) else: x = tuple(x) return x if allowed_mappings is None: allowed_mappings = ["processor", "tokenizer", "image_processor", "feature_extractor"] processor_types = () if config_class in PROCESSOR_MAPPING and "processor" in allowed_mappings: processor_types = _to_tuple(PROCESSOR_MAPPING[config_class]) else: if config_class in TOKENIZER_MAPPING and "tokenizer" in allowed_mappings: processor_types = TOKENIZER_MAPPING[config_class] if config_class in IMAGE_PROCESSOR_MAPPING and "image_processor" in allowed_mappings: processor_types += _to_tuple(IMAGE_PROCESSOR_MAPPING[config_class]) elif config_class in FEATURE_EXTRACTOR_MAPPING and "feature_extractor" in allowed_mappings: processor_types += _to_tuple(FEATURE_EXTRACTOR_MAPPING[config_class]) processor_types = tuple(p for p in processor_types if p is not None) return processor_types def get_architectures_from_config_class(config_class, arch_mappings, models_to_skip=None): architectures = set() if models_to_skip is None: models_to_skip = [] models_to_skip = UNCONVERTIBLE_MODEL_ARCHITECTURES.union(models_to_skip) for mapping in arch_mappings: if config_class in mapping: models = mapping[config_class] models = tuple(models) if isinstance(models, collections.abc.Sequence) else (models,) for model in models: if model.__name__ not in models_to_skip: architectures.add(model) architectures = tuple(architectures) return architectures def get_config_class_from_processor_class(processor_class): processor_prefix = processor_class.__name__ for postfix in ["TokenizerFast", "Tokenizer", "ImageProcessor", "FeatureExtractor", "Processor"]: processor_prefix = processor_prefix.replace(postfix, "") if processor_prefix == "Wav2Vec2CTC": processor_prefix = "Wav2Vec2" new_config_name = f"{processor_prefix}Config" new_config_class = getattr(transformers_module, new_config_name) return new_config_class def build_processor(config_class, processor_class, allow_no_checkpoint=False): checkpoint = get_checkpoint_from_config_class(config_class) if checkpoint is None: config_class_from_processor_class = get_config_class_from_processor_class(processor_class) checkpoint = get_checkpoint_from_config_class(config_class_from_processor_class) processor = None try: processor = processor_class.from_pretrained(checkpoint) except Exception as e: logger.error(f"{e.__class__.__name__}: {e}") if ( processor is None and checkpoint is not None and issubclass(processor_class, (PreTrainedTokenizerBase, AutoTokenizer)) ): try: config = AutoConfig.from_pretrained(checkpoint) except Exception as e: logger.error(f"{e.__class__.__name__}: {e}") config = None if config is not None: if not isinstance(config, config_class): raise ValueError( f"`config` (which is of type {config.__class__.__name__}) should be an instance of `config_class`" f" ({config_class.__name__})!" 
) tokenizer_class = config.tokenizer_class new_processor_class = None if tokenizer_class is not None: new_processor_class = getattr(transformers_module, tokenizer_class) if new_processor_class != processor_class: processor = build_processor(config_class, new_processor_class) if processor is None: new_processor_classes = get_processor_types_from_config_class( config.__class__, allowed_mappings=["tokenizer"] ) names = [ x.__name__.replace("Fast", "") for x in [processor_class, new_processor_class] if x is not None ] new_processor_classes = [ x for x in new_processor_classes if x is not None and x.__name__.replace("Fast", "") not in names ] if len(new_processor_classes) > 0: new_processor_class = new_processor_classes[0] for x in new_processor_classes: if x.__name__.endswith("Fast"): new_processor_class = x break processor = build_processor(config_class, new_processor_class) if processor is None: if issubclass(processor_class, ProcessorMixin): attrs = {} for attr_name in processor_class.attributes: attrs[attr_name] = [] attr_class_names = getattr(processor_class, f"{attr_name}_class") if not isinstance(attr_class_names, tuple): attr_class_names = (attr_class_names,) for name in attr_class_names: attr_class = getattr(transformers_module, name) attr = build_processor(config_class, attr_class) if attr is not None: attrs[attr_name].append(attr) if all(len(v) > 0 for v in attrs.values()): try: processor = processor_class(**{k: v[0] for k, v in attrs.items()}) except Exception as e: logger.error(f"{e.__class__.__name__}: {e}") else: config_class_from_processor_class = get_config_class_from_processor_class(processor_class) if config_class_from_processor_class != config_class: processor = build_processor(config_class_from_processor_class, processor_class) if ( processor is None and allow_no_checkpoint and (issubclass(processor_class, BaseImageProcessor) or issubclass(processor_class, FeatureExtractionMixin)) ): try: processor = processor_class() except Exception as e: logger.error(f"{e.__class__.__name__}: {e}") if processor is not None: if not (isinstance(processor, processor_class) or processor_class.__name__.startswith("Auto")): raise ValueError( f"`processor` (which is of type {processor.__class__.__name__}) should be an instance of" f" {processor_class.__name__} or an Auto class!" ) return processor def get_tiny_config(config_class, model_class=None, **model_tester_kwargs): model_type = config_class.model_type config_source_file = inspect.getsourcefile(config_class) modeling_name = config_source_file.split(os.path.sep)[-1].replace("configuration_", "").replace(".py", "") try: print("Importing", model_type_to_module_name(model_type)) module_name = model_type_to_module_name(model_type) if not modeling_name.startswith(module_name): raise ValueError(f"{modeling_name} doesn't start with {module_name}!") test_file = os.path.join("tests", "models", module_name, f"test_modeling_{modeling_name}.py") models_to_model_testers = get_model_to_tester_mapping(test_file) model_tester_class = None tester_classes = [] if model_class is not None: tester_classes = get_tester_classes_for_model(test_file, model_class) else: for _tester_classes in models_to_model_testers.values(): tester_classes.extend(_tester_classes) if len(tester_classes) > 0: model_tester_class = sorted(tester_classes, key=lambda x: (len(x.__name__), x.__name__))[0] except ModuleNotFoundError: error = f"Tiny config not created for {model_type} - cannot find the testing module from the model name." 
raise ValueError(error) if model_tester_class is None: error = f"Tiny config not created for {model_type} - no model tester is found in the testing module." raise ValueError(error) if "vocab_size" in model_tester_kwargs: if "text_kwargs" in inspect.signature(model_tester_class.__init__).parameters.keys(): vocab_size = model_tester_kwargs.pop("vocab_size") model_tester_kwargs["text_kwargs"] = {"vocab_size": vocab_size} model_tester = model_tester_class(parent=None, **model_tester_kwargs) if hasattr(model_tester, "get_pipeline_config"): config = model_tester.get_pipeline_config() elif hasattr(model_tester, "prepare_config_and_inputs"): config = model_tester.prepare_config_and_inputs()[0] elif hasattr(model_tester, "get_config"): config = model_tester.get_config() else: error = ( f"Tiny config not created for {model_type} - the model tester {model_tester_class.__name__} lacks" " necessary method to create config." ) raise ValueError(error) max_positions = [] for key in ["max_position_embeddings", "max_source_positions", "max_target_positions"]: if getattr(config, key, 0) > 0: max_positions.append(getattr(config, key)) if getattr(config, "text_config", None) is not None: if getattr(config.text_config, key, None) is not None: max_positions.append(getattr(config.text_config, key)) if len(max_positions) > 0: max_position = max(200, min(max_positions)) for key in ["max_position_embeddings", "max_source_positions", "max_target_positions"]: if getattr(config, key, 0) > 0: setattr(config, key, max_position) if getattr(config, "text_config", None) is not None: if getattr(config.text_config, key, None) is not None: setattr(config.text_config, key, max_position) return config def convert_tokenizer(tokenizer_fast: PreTrainedTokenizerFast): new_tokenizer = tokenizer_fast.train_new_from_iterator( data["training_ds"]["text"], TARGET_VOCAB_SIZE, show_progress=False ) if not isinstance(new_tokenizer, LayoutLMv3TokenizerFast): new_tokenizer(data["testing_ds"]["text"]) return new_tokenizer def convert_feature_extractor(feature_extractor, tiny_config): to_convert = False kwargs = {} if hasattr(tiny_config, "image_size"): kwargs["size"] = tiny_config.image_size kwargs["crop_size"] = tiny_config.image_size to_convert = True elif ( hasattr(tiny_config, "vision_config") and tiny_config.vision_config is not None and hasattr(tiny_config.vision_config, "image_size") ): kwargs["size"] = tiny_config.vision_config.image_size kwargs["crop_size"] = tiny_config.vision_config.image_size to_convert = True if hasattr(tiny_config, "input_feat_per_channel"): kwargs["feature_size"] = tiny_config.input_feat_per_channel kwargs["num_mel_bins"] = tiny_config.input_feat_per_channel to_convert = True if to_convert: feature_extractor = feature_extractor.__class__(**kwargs) return feature_extractor def convert_processors(processors, tiny_config, output_folder, result): def _sanity_check(fast_tokenizer, slow_tokenizer, keep_fast_tokenizer=False): if fast_tokenizer is not None and slow_tokenizer is not None: if fast_tokenizer.vocab_size != slow_tokenizer.vocab_size: warning_messagae = ( "The fast/slow tokenizers " f"({fast_tokenizer.__class__.__name__}/{slow_tokenizer.__class__.__name__}) have different " "vocabulary size: " f"fast_tokenizer.vocab_size = {fast_tokenizer.vocab_size} and " f"slow_tokenizer.vocab_size = {slow_tokenizer.vocab_size}." 
) result["warnings"].append(warning_messagae) if not keep_fast_tokenizer: fast_tokenizer = None slow_tokenizer = None if fast_tokenizer is not None and slow_tokenizer is not None: if len(fast_tokenizer) != len(slow_tokenizer): warning_messagae = ( f"The fast/slow tokenizers () have different length: " f"len(fast_tokenizer) = {len(fast_tokenizer)} and " f"len(slow_tokenizer) = {len(slow_tokenizer)}." ) result["warnings"].append(warning_messagae) if not keep_fast_tokenizer: fast_tokenizer = None slow_tokenizer = None return fast_tokenizer, slow_tokenizer tokenizers = [] feature_extractors = [] for processor in processors: if isinstance(processor, PreTrainedTokenizerBase): if processor.__class__.__name__ not in {x.__class__.__name__ for x in tokenizers}: tokenizers.append(processor) elif isinstance(processor, BaseImageProcessor): if processor.__class__.__name__ not in {x.__class__.__name__ for x in feature_extractors}: feature_extractors.append(processor) elif isinstance(processor, FeatureExtractionMixin): if processor.__class__.__name__ not in {x.__class__.__name__ for x in feature_extractors}: feature_extractors.append(processor) elif isinstance(processor, ProcessorMixin): if hasattr(processor, "tokenizer"): if processor.tokenizer.__class__.__name__ not in {x.__class__.__name__ for x in tokenizers}: tokenizers.append(processor.tokenizer) if hasattr(processor, "image_processor"): if processor.image_processor.__class__.__name__ not in { x.__class__.__name__ for x in feature_extractors }: feature_extractors.append(processor.image_processor) elif hasattr(processor, "feature_extractor"): if processor.feature_extractor.__class__.__name__ not in { x.__class__.__name__ for x in feature_extractors }: feature_extractors.append(processor.feature_extractor) num_types = len({x.__class__.__name__ for x in feature_extractors}) if num_types >= 2: raise ValueError(f"`feature_extractors` should contain at most 1 type, but it contains {num_types} types!") num_types = len({x.__class__.__name__.replace("Fast", "") for x in tokenizers}) if num_types >= 2: raise ValueError(f"`tokenizers` should contain at most 1 tokenizer type, but it contains {num_types} types!") fast_tokenizer = None slow_tokenizer = None for tokenizer in tokenizers: if isinstance(tokenizer, PreTrainedTokenizerFast): fast_tokenizer = tokenizer else: slow_tokenizer = tokenizer fast_tokenizer, slow_tokenizer = _sanity_check(fast_tokenizer, slow_tokenizer, keep_fast_tokenizer=True) original_fast_tokenizer, original_slow_tokenizer = fast_tokenizer, slow_tokenizer if fast_tokenizer: try: if fast_tokenizer.vocab_size > TARGET_VOCAB_SIZE: fast_tokenizer = convert_tokenizer(fast_tokenizer) except Exception: result["warnings"].append( ( f"Failed to convert the fast tokenizer for {fast_tokenizer.__class__.__name__}.", traceback.format_exc(), ) ) if fast_tokenizer: try: with tempfile.TemporaryDirectory() as tmpdir: fast_tokenizer.save_pretrained(tmpdir) try: slow_tokenizer = AutoTokenizer.from_pretrained(tmpdir, use_fast=False) except Exception: result["warnings"].append( ( f"Failed to load the slow tokenizer saved from {fast_tokenizer.__class__.__name__}.", traceback.format_exc(), ) ) slow_tokenizer = None except Exception: result["warnings"].append( ( f"Failed to save the fast tokenizer for {fast_tokenizer.__class__.__name__}.", traceback.format_exc(), ) ) fast_tokenizer = None fast_tokenizer, slow_tokenizer = _sanity_check(fast_tokenizer, slow_tokenizer, keep_fast_tokenizer=False) if (original_fast_tokenizer is not None and fast_tokenizer is None) or ( 
original_slow_tokenizer is not None and slow_tokenizer is None ): warning_messagae = ( "There are some issues when converting the fast/slow tokenizers. The original tokenizers from the Hub " " will be used instead." ) result["warnings"].append(warning_messagae) fast_tokenizer = original_fast_tokenizer slow_tokenizer = original_slow_tokenizer if fast_tokenizer: with tempfile.TemporaryDirectory() as tmpdir: try: fast_tokenizer.save_pretrained(tmpdir) except Exception: result["warnings"].append( ( f"Failed to save the fast tokenizer for {fast_tokenizer.__class__.__name__}.", traceback.format_exc(), ) ) fast_tokenizer = None if slow_tokenizer: with tempfile.TemporaryDirectory() as tmpdir: try: slow_tokenizer.save_pretrained(tmpdir) except Exception: result["warnings"].append( ( f"Failed to save the slow tokenizer for {slow_tokenizer.__class__.__name__}.", traceback.format_exc(), ) ) slow_tokenizer = None try: feature_extractors = [convert_feature_extractor(p, tiny_config) for p in feature_extractors] except Exception: result["warnings"].append( ( "Failed to convert feature extractors.", traceback.format_exc(), ) ) feature_extractors = [] if hasattr(tiny_config, "max_position_embeddings") and tiny_config.max_position_embeddings > 0: if fast_tokenizer is not None: if fast_tokenizer.__class__.__name__ in [ "RobertaTokenizerFast", "XLMRobertaTokenizerFast", "LongformerTokenizerFast", "MPNetTokenizerFast", ]: fast_tokenizer.model_max_length = tiny_config.max_position_embeddings - 2 else: fast_tokenizer.model_max_length = tiny_config.max_position_embeddings if slow_tokenizer is not None: if slow_tokenizer.__class__.__name__ in [ "RobertaTokenizer", "XLMRobertaTokenizer", "LongformerTokenizer", "MPNetTokenizer", ]: slow_tokenizer.model_max_length = tiny_config.max_position_embeddings - 2 else: slow_tokenizer.model_max_length = tiny_config.max_position_embeddings processors = [fast_tokenizer, slow_tokenizer] + feature_extractors processors = [p for p in processors if p is not None] for p in processors: p.save_pretrained(output_folder) return processors def get_checkpoint_dir(output_dir, model_arch): arch_name = model_arch.__name__ if arch_name.startswith("TF"): arch_name = arch_name[2:] elif arch_name.startswith("Flax"): arch_name = arch_name[4:] return os.path.join(output_dir, arch_name) def build_model(model_arch, tiny_config, output_dir): checkpoint_dir = get_checkpoint_dir(output_dir, model_arch) processor_output_dir = os.path.join(output_dir, "processors") if os.path.isdir(processor_output_dir): shutil.copytree(processor_output_dir, checkpoint_dir, dirs_exist_ok=True) tiny_config = copy.deepcopy(tiny_config) if any(model_arch.__name__.endswith(x) for x in ["ForCausalLM", "LMHeadModel"]): tiny_config.is_encoder_decoder = False tiny_config.is_decoder = True model = model_arch(config=tiny_config) model.save_pretrained(checkpoint_dir) model.from_pretrained(checkpoint_dir) return model def fill_result_with_error(result, error, trace, models_to_create): error = (error, trace) result["error"] = error for framework in FRAMEWORKS: if framework in models_to_create: result[framework] = {} for model_arch in models_to_create[framework]: result[framework][model_arch.__name__] = {"model": None, "checkpoint": None, "error": error} result["processor"] = {p.__class__.__name__: p.__class__.__name__ for p in result["processor"].values()} def upload_model(model_dir, organization, token): arch_name = model_dir.split(os.path.sep)[-1] repo_name = f"tiny-random-{arch_name}" repo_id = f"{organization}/{repo_name}" 
repo_exist = False error = None try: create_repo(repo_id=repo_id, exist_ok=False, repo_type="model", token=token) except Exception as e: error = e if "You already created" in str(e): error = None logger.warning("Remote repository exists and will be cloned.") repo_exist = True try: create_repo(repo_id=repo_id, exist_ok=True, repo_type="model", token=token) except Exception as e: error = e if error is not None: raise error with tempfile.TemporaryDirectory() as tmpdir: repo = Repository(local_dir=tmpdir, clone_from=repo_id, token=token) repo.git_pull() shutil.copytree(model_dir, tmpdir, dirs_exist_ok=True) if repo_exist: hub_pr_url = upload_folder( folder_path=model_dir, repo_id=repo_id, repo_type="model", commit_message=f"Update tiny models for {arch_name}", commit_description=f"Upload tiny models for {arch_name}", create_pr=True, token=token, ) logger.warning(f"PR open in {hub_pr_url}.") else: repo.git_add(auto_lfs_track=True) repo.git_commit(f"Upload tiny models for {arch_name}") repo.git_push(blocking=True) logger.warning(f"Tiny models {arch_name} pushed to {repo_id}.") def build_composite_models(config_class, output_dir): import tempfile from transformers import ( BertConfig, BertLMHeadModel, BertModel, BertTokenizer, BertTokenizerFast, EncoderDecoderModel, GPT2Config, GPT2LMHeadModel, GPT2Tokenizer, GPT2TokenizerFast, SpeechEncoderDecoderModel, TFEncoderDecoderModel, TFVisionEncoderDecoderModel, TFVisionTextDualEncoderModel, VisionEncoderDecoderModel, VisionTextDualEncoderModel, ViTConfig, ViTFeatureExtractor, ViTModel, Wav2Vec2Config, Wav2Vec2Model, Wav2Vec2Processor, ) result = {"error": None, "warnings": []} if config_class.model_type == "encoder-decoder": encoder_config_class = BertConfig decoder_config_class = BertConfig encoder_processor = (BertTokenizerFast, BertTokenizer) decoder_processor = (BertTokenizerFast, BertTokenizer) encoder_class = BertModel decoder_class = BertLMHeadModel model_class = EncoderDecoderModel tf_model_class = TFEncoderDecoderModel elif config_class.model_type == "vision-encoder-decoder": encoder_config_class = ViTConfig decoder_config_class = GPT2Config encoder_processor = (ViTFeatureExtractor,) decoder_processor = (GPT2TokenizerFast, GPT2Tokenizer) encoder_class = ViTModel decoder_class = GPT2LMHeadModel model_class = VisionEncoderDecoderModel tf_model_class = TFVisionEncoderDecoderModel elif config_class.model_type == "speech-encoder-decoder": encoder_config_class = Wav2Vec2Config decoder_config_class = BertConfig encoder_processor = (Wav2Vec2Processor,) decoder_processor = (BertTokenizerFast, BertTokenizer) encoder_class = Wav2Vec2Model decoder_class = BertLMHeadModel model_class = SpeechEncoderDecoderModel tf_model_class = None elif config_class.model_type == "vision-text-dual-encoder": encoder_config_class = ViTConfig decoder_config_class = BertConfig encoder_processor = (ViTFeatureExtractor,) decoder_processor = (BertTokenizerFast, BertTokenizer) encoder_class = ViTModel decoder_class = BertModel model_class = VisionTextDualEncoderModel tf_model_class = TFVisionTextDualEncoderModel with tempfile.TemporaryDirectory() as tmpdir: try: models_to_create = {"processor": encoder_processor, "pytorch": (encoder_class,), "tensorflow": []} encoder_output_dir = os.path.join(tmpdir, "encoder") build(encoder_config_class, models_to_create, encoder_output_dir) models_to_create = {"processor": decoder_processor, "pytorch": (decoder_class,), "tensorflow": []} decoder_output_dir = os.path.join(tmpdir, "decoder") build(decoder_config_class, models_to_create, 
decoder_output_dir) encoder_path = os.path.join(encoder_output_dir, encoder_class.__name__) decoder_path = os.path.join(decoder_output_dir, decoder_class.__name__) if config_class.model_type != "vision-text-dual-encoder": decoder_config = decoder_config_class.from_pretrained(decoder_path) decoder_config.is_decoder = True decoder_config.add_cross_attention = True model = model_class.from_encoder_decoder_pretrained( encoder_path, decoder_path, decoder_config=decoder_config, ) elif config_class.model_type == "vision-text-dual-encoder": model = model_class.from_vision_text_pretrained(encoder_path, decoder_path) model_path = os.path.join( output_dir, f"{model_class.__name__}-{encoder_config_class.model_type}-{decoder_config_class.model_type}", ) model.save_pretrained(model_path) if tf_model_class is not None: model = tf_model_class.from_pretrained(model_path) model.save_pretrained(model_path) encoder_processor_path = os.path.join(encoder_output_dir, "processors") decoder_processor_path = os.path.join(decoder_output_dir, "processors") if os.path.isdir(encoder_processor_path): shutil.copytree(encoder_processor_path, model_path, dirs_exist_ok=True) if os.path.isdir(decoder_processor_path): shutil.copytree(decoder_processor_path, model_path, dirs_exist_ok=True) result["processor"] = {x.__name__: x.__name__ for x in encoder_processor + decoder_processor} result["pytorch"] = {model_class.__name__: {"model": model_class.__name__, "checkpoint": model_path}} result["tensorflow"] = {} if tf_model_class is not None: result["tensorflow"] = { tf_model_class.__name__: {"model": tf_model_class.__name__, "checkpoint": model_path} } except Exception: result["error"] = ( f"Failed to build models for {config_class.__name__}.", traceback.format_exc(), ) if not result["error"]: del result["error"] if not result["warnings"]: del result["warnings"] return result def get_token_id_from_tokenizer(token_id_name, tokenizer, original_token_id): token_id = original_token_id if not token_id_name.endswith("_token_id"): raise ValueError(f"`token_id_name` is {token_id_name}, which doesn't end with `_token_id`!") token = getattr(tokenizer, token_id_name.replace("_token_id", "_token"), None) if token is not None: if isinstance(tokenizer, PreTrainedTokenizerFast): token_id = tokenizer._convert_token_to_id_with_added_voc(token) else: token_id = tokenizer._convert_token_to_id(token) return token_id def get_config_overrides(config_class, processors): if config_class.__name__ == "BarkConfig": return {} config_overrides = {} tokenizer = None for processor in processors: if isinstance(processor, PreTrainedTokenizerFast): tokenizer = processor break elif isinstance(processor, PreTrainedTokenizer): tokenizer = processor if tokenizer is None: return config_overrides vocab_size = len(tokenizer) if config_class.__name__ == "GPTSanJapaneseConfig": vocab_size += 2 config_overrides["vocab_size"] = vocab_size model_tester_kwargs = {"vocab_size": vocab_size} if config_class.__name__ == "FSMTConfig": del model_tester_kwargs["vocab_size"] model_tester_kwargs["src_vocab_size"] = tokenizer.src_vocab_size model_tester_kwargs["tgt_vocab_size"] = tokenizer.tgt_vocab_size _tiny_config = get_tiny_config(config_class, **model_tester_kwargs) if hasattr(_tiny_config, "text_config"): _tiny_config = _tiny_config.text_config for attr in dir(_tiny_config): if attr.endswith("_token_id"): token_id = getattr(_tiny_config, attr) if token_id is not None: token_id = get_token_id_from_tokenizer(attr, tokenizer, original_token_id=token_id) config_overrides[attr] = 
token_id if config_class.__name__ == "FSMTConfig": config_overrides["src_vocab_size"] = tokenizer.src_vocab_size config_overrides["tgt_vocab_size"] = tokenizer.tgt_vocab_size config_overrides["decoder"] = configuration_fsmt.DecoderConfig( vocab_size=tokenizer.tgt_vocab_size, bos_token_id=config_overrides["eos_token_id"] ) return config_overrides def build(config_class, models_to_create, output_dir): if data["training_ds"] is None or data["testing_ds"] is None: ds = load_dataset("wikitext", "wikitext-2-raw-v1") data["training_ds"] = ds["train"] data["testing_ds"] = ds["test"] if config_class.model_type in [ "encoder-decoder", "vision-encoder-decoder", "speech-encoder-decoder", "vision-text-dual-encoder", ]: return build_composite_models(config_class, output_dir) result = {k: {} for k in models_to_create} result["error"] = None result["warnings"] = [] processor_classes = models_to_create["processor"] if len(processor_classes) == 0: error = f"No processor class could be found in {config_class.__name__}." fill_result_with_error(result, error, None, models_to_create) logger.error(result["error"][0]) return result for processor_class in processor_classes: try: processor = build_processor(config_class, processor_class, allow_no_checkpoint=True) if processor is not None: result["processor"][processor_class] = processor except Exception: error = f"Failed to build processor for {processor_class.__name__}." trace = traceback.format_exc() fill_result_with_error(result, error, trace, models_to_create) logger.error(result["error"][0]) return result if len(result["processor"]) == 0: error = f"No processor could be built for {config_class.__name__}." fill_result_with_error(result, error, None, models_to_create) logger.error(result["error"][0]) return result try: tiny_config = get_tiny_config(config_class) except Exception as e: error = f"Failed to get tiny config for {config_class.__name__}: {e}" trace = traceback.format_exc() fill_result_with_error(result, error, trace, models_to_create) logger.error(result["error"][0]) return result processors = list(result["processor"].values()) processor_output_folder = os.path.join(output_dir, "processors") try: processors = convert_processors(processors, tiny_config, processor_output_folder, result) except Exception: error = "Failed to convert the processors." trace = traceback.format_exc() result["warnings"].append((error, trace)) if len(processors) == 0: error = f"No processor is returned by `convert_processors` for {config_class.__name__}." 
fill_result_with_error(result, error, None, models_to_create) logger.error(result["error"][0]) return result try: config_overrides = get_config_overrides(config_class, processors) except Exception as e: error = f"Failure occurs while calling `get_config_overrides`: {e}" trace = traceback.format_exc() fill_result_with_error(result, error, trace, models_to_create) logger.error(result["error"][0]) return result if "vocab_size" in config_overrides: result["vocab_size"] = config_overrides["vocab_size"] for k, v in config_overrides.items(): if hasattr(tiny_config, k): setattr(tiny_config, k, v) if ( hasattr(tiny_config, "text_config") and tiny_config.text_config is not None and hasattr(tiny_config.text_config, k) ): setattr(tiny_config.text_config, k, v) if hasattr(tiny_config, "text_config_dict"): tiny_config.text_config_dict[k] = v if result["warnings"]: logger.warning(result["warnings"][0][0]) result["processor"] = {type(p).__name__: p.__class__.__name__ for p in processors} for pytorch_arch in models_to_create["pytorch"]: result["pytorch"][pytorch_arch.__name__] = {} error = None try: model = build_model(pytorch_arch, tiny_config, output_dir=output_dir) except Exception as e: model = None error = f"Failed to create the pytorch model for {pytorch_arch}: {e}" trace = traceback.format_exc() result["pytorch"][pytorch_arch.__name__]["model"] = model.__class__.__name__ if model is not None else None result["pytorch"][pytorch_arch.__name__]["checkpoint"] = ( get_checkpoint_dir(output_dir, pytorch_arch) if model is not None else None ) if error is not None: result["pytorch"][pytorch_arch.__name__]["error"] = (error, trace) logger.error(f"{pytorch_arch.__name__}: {error}") for tensorflow_arch in models_to_create["tensorflow"]: pt_arch_name = tensorflow_arch.__name__[2:] pt_arch = getattr(transformers_module, pt_arch_name) result["tensorflow"][tensorflow_arch.__name__] = {} error = None if pt_arch.__name__ in result["pytorch"] and result["pytorch"][pt_arch.__name__]["checkpoint"] is not None: ckpt = get_checkpoint_dir(output_dir, pt_arch) try: model = tensorflow_arch.from_pretrained(ckpt) model.save_pretrained(ckpt) except Exception as e: model = None error = f"Failed to convert the pytorch model to the tensorflow model for {pt_arch}: {e}" trace = traceback.format_exc() else: try: model = build_model(tensorflow_arch, tiny_config, output_dir=output_dir) except Exception as e: model = None error = f"Failed to create the tensorflow model for {tensorflow_arch}: {e}" trace = traceback.format_exc() result["tensorflow"][tensorflow_arch.__name__]["model"] = ( model.__class__.__name__ if model is not None else None ) result["tensorflow"][tensorflow_arch.__name__]["checkpoint"] = ( get_checkpoint_dir(output_dir, tensorflow_arch) if model is not None else None ) if error is not None: result["tensorflow"][tensorflow_arch.__name__]["error"] = (error, trace) logger.error(f"{tensorflow_arch.__name__}: {error}") if not result["error"]: del result["error"] if not result["warnings"]: del result["warnings"] return result def build_tiny_model_summary(results, organization=None, token=None): tiny_model_summary = {} for config_name in results: processors = [key for key, value in results[config_name]["processor"].items()] tokenizer_classes = sorted([x for x in processors if x.endswith("TokenizerFast") or x.endswith("Tokenizer")]) processor_classes = sorted([x for x in processors if x not in tokenizer_classes]) for framework in FRAMEWORKS: if framework not in results[config_name]: continue for arch_name in 
results[config_name][framework]: model_classes = [arch_name] base_arch_name = arch_name[2:] if arch_name.startswith("TF") else arch_name if results[config_name][framework][arch_name]["model"] is None: model_classes = [] if base_arch_name not in tiny_model_summary: tiny_model_summary[base_arch_name] = {} tiny_model_summary[base_arch_name].update( { "tokenizer_classes": tokenizer_classes, "processor_classes": processor_classes, } ) tiny_model_summary[base_arch_name]["model_classes"] = sorted( tiny_model_summary[base_arch_name].get("model_classes", []) + model_classes ) if organization is not None: repo_name = f"tiny-random-{base_arch_name}" if base_arch_name in COMPOSITE_MODELS: repo_name = f"tiny-random-{COMPOSITE_MODELS[base_arch_name]}" repo_id = f"{organization}/{repo_name}" try: commit_hash = hf_api.repo_info(repo_id, token=token).sha except Exception: logger.warning(f"Failed to get information for {repo_id}.\n{traceback.format_exc()}") del tiny_model_summary[base_arch_name] continue tiny_model_summary[base_arch_name]["sha"] = commit_hash return tiny_model_summary def build_failed_report(results, include_warning=True): failed_results = {} for config_name in results: if "error" in results[config_name]: if config_name not in failed_results: failed_results[config_name] = {} failed_results[config_name] = {"error": results[config_name]["error"]} if include_warning and "warnings" in results[config_name]: if config_name not in failed_results: failed_results[config_name] = {} failed_results[config_name]["warnings"] = results[config_name]["warnings"] for framework in FRAMEWORKS: if framework not in results[config_name]: continue for arch_name in results[config_name][framework]: if "error" in results[config_name][framework][arch_name]: if config_name not in failed_results: failed_results[config_name] = {} if framework not in failed_results[config_name]: failed_results[config_name][framework] = {} if arch_name not in failed_results[config_name][framework]: failed_results[config_name][framework][arch_name] = {} error = results[config_name][framework][arch_name]["error"] failed_results[config_name][framework][arch_name]["error"] = error return failed_results def build_simple_report(results): text = "" failed_text = "" for config_name in results: for framework in FRAMEWORKS: if framework not in results[config_name]: continue for arch_name in results[config_name][framework]: if "error" in results[config_name][framework][arch_name]: result = results[config_name][framework][arch_name]["error"] failed_text += f"{arch_name}: {result[0]}\n" else: result = ("OK",) text += f"{arch_name}: {result[0]}\n" return text, failed_text def update_tiny_model_summary_file(report_path): with open(os.path.join(report_path, "tiny_model_summary.json")) as fp: new_data = json.load(fp) with open("tests/utils/tiny_model_summary.json") as fp: data = json.load(fp) for key, value in new_data.items(): if key not in data: data[key] = value else: for attr in ["tokenizer_classes", "processor_classes", "model_classes"]: data[key][attr].extend(value[attr]) new_sha = value.get("sha", None) if new_sha is not None: data[key]["sha"] = new_sha updated_data = {} for key in sorted(data.keys()): updated_data[key] = {} for attr, value in data[key].items(): updated_data[key][attr] = sorted(set(value)) if attr != "sha" else value with open(os.path.join(report_path, "updated_tiny_model_summary.json"), "w") as fp: json.dump(updated_data, fp, indent=4, ensure_ascii=False) def create_tiny_models( output_path, all, model_types, models_to_skip, 
no_check, upload, organization, token, num_workers=1, ): clone_path = os.path.abspath(os.path.dirname(os.path.dirname(__file__))) if os.getcwd() != clone_path: raise ValueError(f"This script should be run from the root of the clone of `transformers` {clone_path}") report_path = os.path.join(output_path, "reports") os.makedirs(report_path) _pytorch_arch_mappings = [ x for x in dir(transformers_module) if x.startswith("MODEL_") and x.endswith("_MAPPING") and x != "MODEL_NAMES_MAPPING" ] _tensorflow_arch_mappings = [ x for x in dir(transformers_module) if x.startswith("TF_MODEL_") and x.endswith("_MAPPING") ] pytorch_arch_mappings = [getattr(transformers_module, x) for x in _pytorch_arch_mappings] tensorflow_arch_mappings = [getattr(transformers_module, x) for x in _tensorflow_arch_mappings] config_classes = CONFIG_MAPPING.values() if not all: config_classes = [CONFIG_MAPPING[model_type] for model_type in model_types] processor_type_map = {c: get_processor_types_from_config_class(c) for c in config_classes} to_create = {} for c in config_classes: processors = processor_type_map[c] models = get_architectures_from_config_class(c, pytorch_arch_mappings, models_to_skip) tf_models = get_architectures_from_config_class(c, tensorflow_arch_mappings, models_to_skip) if len(models) + len(tf_models) > 0: to_create[c] = {"processor": processors, "pytorch": models, "tensorflow": tf_models} results = {} if num_workers <= 1: for c, models_to_create in list(to_create.items()): print(f"Create models for {c.__name__} ...") result = build(c, models_to_create, output_dir=os.path.join(output_path, c.model_type)) results[c.__name__] = result print("=" * 40) else: all_build_args = [] for c, models_to_create in list(to_create.items()): all_build_args.append((c, models_to_create, os.path.join(output_path, c.model_type))) with multiprocessing.Pool() as pool: results = pool.starmap(build, all_build_args) results = {buid_args[0].__name__: result for buid_args, result in zip(all_build_args, results)} if upload: if organization is None: raise ValueError("The argument `organization` could not be `None`. No model is uploaded") to_upload = [] for model_type in os.listdir(output_path): if model_type == "reports": continue for arch in os.listdir(os.path.join(output_path, model_type)): if arch == "processors": continue to_upload.append(os.path.join(output_path, model_type, arch)) to_upload = sorted(to_upload) upload_results = {} if len(to_upload) > 0: for model_dir in to_upload: try: upload_model(model_dir, organization, token) except Exception as e: error = f"Failed to upload {model_dir}. 
{e.__class__.__name__}: {e}" logger.error(error) upload_results[model_dir] = error with open(os.path.join(report_path, "failed_uploads.json"), "w") as fp: json.dump(upload_results, fp, indent=4) tiny_model_summary = build_tiny_model_summary(results, organization=organization, token=token) with open(os.path.join(report_path, "tiny_model_summary.json"), "w") as fp: json.dump(tiny_model_summary, fp, indent=4) with open(os.path.join(report_path, "tiny_model_creation_report.json"), "w") as fp: json.dump(results, fp, indent=4) failed_results = build_failed_report(results) with open(os.path.join(report_path, "failed_report.json"), "w") as fp: json.dump(failed_results, fp, indent=4) simple_report, failed_report = build_simple_report(results) with open(os.path.join(report_path, "simple_report.txt"), "w") as fp: fp.write(simple_report) with open(os.path.join(report_path, "simple_failed_report.txt"), "w") as fp: fp.write(failed_report) update_tiny_model_summary_file(report_path=os.path.join(output_path, "reports")) if __name__ == "__main__": multiprocessing.set_start_method("spawn") def list_str(values): return values.split(",") parser = argparse.ArgumentParser() parser.add_argument("--all", action="store_true", help="Will create all tiny models.") parser.add_argument( "--no_check", action="store_true", help="If set, will not check the validity of architectures. Use with caution.", ) parser.add_argument( "-m", "--model_types", type=list_str, help="Comma-separated list of model type(s) from which the tiny models will be created.", ) parser.add_argument( "--models_to_skip", type=list_str, help=( "Comma-separated list of model class names(s) from which the tiny models won't be created.\nThis is usually " "the list of model classes that have their tiny versions already uploaded to the Hub." ), ) parser.add_argument("--upload", action="store_true", help="If to upload the created tiny models to the Hub.") parser.add_argument( "--organization", default=None, type=str, help="The organization on the Hub to which the tiny models will be uploaded.", ) parser.add_argument( "--token", default=None, type=str, help="A valid authentication token for HuggingFace Hub with write access." ) parser.add_argument("output_path", type=Path, help="Path indicating where to store generated model.") parser.add_argument("--num_workers", default=1, type=int, help="The number of workers to run.") args = parser.parse_args() if not args.all and not args.model_types: raise ValueError("Please provide at least one model type or pass `--all` to export all architectures.") create_tiny_models( args.output_path, args.all, args.model_types, args.models_to_skip, args.no_check, args.upload, args.organization, args.token, args.num_workers, )
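For reference, here is a minimal, hedged sketch of how the entry point above can be driven directly; it is not the project's documented workflow. It assumes the full module and its dependencies are importable, that the call is made from the root of a transformers clone (create_tiny_models raises otherwise), that "bert" is a valid key of CONFIG_MAPPING, and that the output directory does not already contain a reports folder; the Hub upload is disabled, so no organization or token is needed.

# Hedged sketch: build tiny checkpoints for a single model type, without uploading.
create_tiny_models(
    output_path="tiny_models",   # checkpoints per model type, plus tiny_models/reports
    all=False,                   # only build the model types listed below
    model_types=["bert"],        # must be keys of CONFIG_MAPPING (assumed valid here)
    models_to_skip=[],           # model class names to leave out
    no_check=False,
    upload=False,                # skip upload_model(), so organization/token can stay None
    organization=None,
    token=None,
    num_workers=1,               # > 1 switches to a multiprocessing.Pool over config classes
)

The equivalent command-line call would be roughly python create_dummy_models.py -m bert tiny_models, with the script file name assumed here.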
Utility that sorts the imports in the custom inits of Transformers (code below). Transformers uses init files that delay the import of an object to when it is actually needed; this avoids the main init importing all models, which would make the line import transformers very slow when the user has all optional dependencies installed. The inits with delayed imports have two halves: one defining a dictionary _import_structure that maps modules to the names of the objects in each module, and one under TYPE_CHECKING that looks like a normal init for type checkers. isort or ruff properly sort the second half, which looks like traditional imports, so the goal of this script is to sort the first half. Run it from the root of the repo with python utils/custom_init_isort.py to auto-sort the imports (used in make style), or add the --check_only flag to only check without fixing (used in make quality).
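As a quick illustration of the sorting rules the script implements (constants first, then classes, then functions, each group alphabetized while ignoring case and underscores), the following check can be run once the code below is loaded; the statement and the object names are made up for the example and are not taken from any real init.

example = '_import_structure["models.bert"] = ["load_tf_weights", "BertConfig", "BERT_PRETRAINED_CONFIG_ARCHIVE_MAP"]'
print(sort_objects_in_import(example))
# Expected output: the constant first, the class second, the function last.
# _import_structure["models.bert"] = ["BERT_PRETRAINED_CONFIG_ARCHIVE_MAP", "BertConfig", "load_tf_weights"]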
import argparse import os import re from typing import Any, Callable, List, Optional PATH_TO_TRANSFORMERS = "src/transformers" _re_indent = re.compile(r"^(\s*)\S") _re_direct_key = re.compile(r'^\s*"([^"]+)":') _re_indirect_key = re.compile(r'^\s*_import_structure\["([^"]+)"\]') _re_strip_line = re.compile(r'^\s*"([^"]+)",\s*$') _re_bracket_content = re.compile(r"\[([^\]]+)\]") def get_indent(line: str) -> str: search = _re_indent.search(line) return "" if search is None else search.groups()[0] def split_code_in_indented_blocks( code: str, indent_level: str = "", start_prompt: Optional[str] = None, end_prompt: Optional[str] = None ) -> List[str]: index = 0 lines = code.split("\n") if start_prompt is not None: while not lines[index].startswith(start_prompt): index += 1 blocks = ["\n".join(lines[:index])] else: blocks = [] current_block = [lines[index]] index += 1 while index < len(lines) and (end_prompt is None or not lines[index].startswith(end_prompt)): if len(lines[index]) > 0 and get_indent(lines[index]) == indent_level: if len(current_block) > 0 and get_indent(current_block[-1]).startswith(indent_level + " "): current_block.append(lines[index]) blocks.append("\n".join(current_block)) if index < len(lines) - 1: current_block = [lines[index + 1]] index += 1 else: current_block = [] else: blocks.append("\n".join(current_block)) current_block = [lines[index]] else: current_block.append(lines[index]) index += 1 if len(current_block) > 0: blocks.append("\n".join(current_block)) if end_prompt is not None and index < len(lines): blocks.append("\n".join(lines[index:])) return blocks def ignore_underscore_and_lowercase(key: Callable[[Any], str]) -> Callable[[Any], str]: def _inner(x): return key(x).lower().replace("_", "") return _inner def sort_objects(objects: List[Any], key: Optional[Callable[[Any], str]] = None) -> List[Any]: def noop(x): return x if key is None: key = noop constants = [obj for obj in objects if key(obj).isupper()] classes = [obj for obj in objects if key(obj)[0].isupper() and not key(obj).isupper()] functions = [obj for obj in objects if not key(obj)[0].isupper()] key1 = ignore_underscore_and_lowercase(key) return sorted(constants, key=key1) + sorted(classes, key=key1) + sorted(functions, key=key1) def sort_objects_in_import(import_statement: str) -> str: def _replace(match): imports = match.groups()[0] if "," not in imports: return f"[{imports}]" keys = [part.strip().replace('"', "") for part in imports.split(",")] if len(keys[-1]) == 0: keys = keys[:-1] return "[" + ", ".join([f'"{k}"' for k in sort_objects(keys)]) + "]" lines = import_statement.split("\n") if len(lines) > 3: idx = 2 if lines[1].strip() == "[" else 1 keys_to_sort = [(i, _re_strip_line.search(line).groups()[0]) for i, line in enumerate(lines[idx:-idx])] sorted_indices = sort_objects(keys_to_sort, key=lambda x: x[1]) sorted_lines = [lines[x[0] + idx] for x in sorted_indices] return "\n".join(lines[:idx] + sorted_lines + lines[-idx:]) elif len(lines) == 3: if _re_bracket_content.search(lines[1]) is not None: lines[1] = _re_bracket_content.sub(_replace, lines[1]) else: keys = [part.strip().replace('"', "") for part in lines[1].split(",")] if len(keys[-1]) == 0: keys = keys[:-1] lines[1] = get_indent(lines[1]) + ", ".join([f'"{k}"' for k in sort_objects(keys)]) return "\n".join(lines) else: import_statement = _re_bracket_content.sub(_replace, import_statement) return import_statement def sort_imports(file: str, check_only: bool = True): with open(file, encoding="utf-8") as f: code = f.read() if 
"_import_structure" not in code: return main_blocks = split_code_in_indented_blocks( code, start_prompt="_import_structure = {", end_prompt="if TYPE_CHECKING:" ) for block_idx in range(1, len(main_blocks) - 1): block = main_blocks[block_idx] block_lines = block.split("\n") line_idx = 0 while line_idx < len(block_lines) and "_import_structure" not in block_lines[line_idx]: if "import dummy" in block_lines[line_idx]: line_idx = len(block_lines) else: line_idx += 1 if line_idx >= len(block_lines): continue internal_block_code = "\n".join(block_lines[line_idx:-1]) indent = get_indent(block_lines[1]) internal_blocks = split_code_in_indented_blocks(internal_block_code, indent_level=indent) pattern = _re_direct_key if "_import_structure = {" in block_lines[0] else _re_indirect_key keys = [(pattern.search(b).groups()[0] if pattern.search(b) is not None else None) for b in internal_blocks] keys_to_sort = [(i, key) for i, key in enumerate(keys) if key is not None] sorted_indices = [x[0] for x in sorted(keys_to_sort, key=lambda x: x[1])] count = 0 reorderded_blocks = [] for i in range(len(internal_blocks)): if keys[i] is None: reorderded_blocks.append(internal_blocks[i]) else: block = sort_objects_in_import(internal_blocks[sorted_indices[count]]) reorderded_blocks.append(block) count += 1 main_blocks[block_idx] = "\n".join(block_lines[:line_idx] + reorderded_blocks + [block_lines[-1]]) if code != "\n".join(main_blocks): if check_only: return True else: print(f"Overwriting {file}.") with open(file, "w", encoding="utf-8") as f: f.write("\n".join(main_blocks)) def sort_imports_in_all_inits(check_only=True): failures = [] for root, _, files in os.walk(PATH_TO_TRANSFORMERS): if "__init__.py" in files: result = sort_imports(os.path.join(root, "__init__.py"), check_only=check_only) if result: failures = [os.path.join(root, "__init__.py")] if len(failures) > 0: raise ValueError(f"Would overwrite {len(failures)} files, run `make style`.") if __name__ == "__main__": parser = argparse.ArgumentParser() parser.add_argument("--check_only", action="store_true", help="Whether to only check or fix style.") args = parser.parse_args() sort_imports_in_all_inits(check_only=args.check_only)
Utility that collects selected warning categories (e.g. DeprecationWarning, UserWarning, FutureWarning) from the artifacts of a GitHub Actions workflow run (code below). It fetches the artifact download links, downloads the artifacts (sleeping between downloads to be gentle to GitHub), then extracts the targeted warnings from the warnings.txt file inside each artifact. When run with --from_gh inside a GitHub Actions workflow, the artifacts are expected to have already been downloaded with actions/download-artifact@v3 and are read as plain directories instead of zip files.
import argparse import json import os import time import zipfile from get_ci_error_statistics import download_artifact, get_artifacts_links from transformers import logging logger = logging.get_logger(__name__) def extract_warnings_from_single_artifact(artifact_path, targets): selected_warnings = set() buffer = [] def parse_line(fp): for line in fp: if isinstance(line, bytes): line = line.decode("UTF-8") if "warnings summary (final)" in line: continue elif not line.startswith(" "): if len(buffer) > 0: warning = "\n".join(buffer) if any(f": {x}: " in warning for x in targets): selected_warnings.add(warning) buffer.clear() continue else: line = line.strip() buffer.append(line) if from_gh: for filename in os.listdir(artifact_path): file_path = os.path.join(artifact_path, filename) if not os.path.isdir(file_path): if filename != "warnings.txt": continue with open(file_path) as fp: parse_line(fp) else: try: with zipfile.ZipFile(artifact_path) as z: for filename in z.namelist(): if not os.path.isdir(filename): if filename != "warnings.txt": continue with z.open(filename) as fp: parse_line(fp) except Exception: logger.warning( f"{artifact_path} is either an invalid zip file or something else wrong. This file is skipped." ) return selected_warnings def extract_warnings(artifact_dir, targets): selected_warnings = set() paths = [os.path.join(artifact_dir, p) for p in os.listdir(artifact_dir) if (p.endswith(".zip") or from_gh)] for p in paths: selected_warnings.update(extract_warnings_from_single_artifact(p, targets)) return selected_warnings if __name__ == "__main__": def list_str(values): return values.split(",") parser = argparse.ArgumentParser() parser.add_argument("--workflow_run_id", type=str, required=True, help="A GitHub Actions workflow run id.") parser.add_argument( "--output_dir", type=str, required=True, help="Where to store the downloaded artifacts and other result files.", ) parser.add_argument("--token", default=None, type=str, help="A token that has actions:read permission.") parser.add_argument( "--targets", default="DeprecationWarning,UserWarning,FutureWarning", type=list_str, help="Comma-separated list of target warning(s) which we want to extract.", ) parser.add_argument( "--from_gh", action="store_true", help="If running from a GitHub action workflow and collecting warnings from its artifacts.", ) args = parser.parse_args() from_gh = args.from_gh if from_gh: pass else: os.makedirs(args.output_dir, exist_ok=True) artifacts = get_artifacts_links(args.workflow_run_id, token=args.token) with open(os.path.join(args.output_dir, "artifacts.json"), "w", encoding="UTF-8") as fp: json.dump(artifacts, fp, ensure_ascii=False, indent=4) for idx, (name, url) in enumerate(artifacts.items()): print(name) print(url) print("=" * 80) download_artifact(name, url, args.output_dir, args.token) time.sleep(1) selected_warnings = extract_warnings(args.output_dir, args.targets) selected_warnings = sorted(selected_warnings) with open(os.path.join(args.output_dir, "selected_warnings.json"), "w", encoding="UTF-8") as fp: json.dump(selected_warnings, fp, ensure_ascii=False, indent=4)
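Note that extract_warnings and extract_warnings_from_single_artifact read the module-level from_gh flag, which is only assigned inside the __main__ block, so they cannot be called in isolation without defining it. Below is a hedged sketch of a direct call, assuming it runs in this same module and that "ci_artifacts" (an illustrative name) contains the downloaded artifact zip files.

from_gh = False  # parse downloaded *.zip artifacts rather than already-extracted directories
targets = ["DeprecationWarning", "UserWarning", "FutureWarning"]
selected = extract_warnings("ci_artifacts", targets)
for warning in sorted(selected):
    print(warning)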
Utility that builds error statistics for a GitHub Actions workflow run (code below). It extracts the job names and their job links, normalizes names coming from workflow_call events (where a job name combines the caller and callee names, e.g. "PyTorch 1.11 / Model tests (models/albert, single-gpu)") by keeping only the part after the " / " separator, gets all artifact download links, downloads the artifacts (sleeping between downloads to be gentle to GitHub), and extracts the errors and failed tests from the failures_line.txt, summary_short.txt and job_name.txt files inside each artifact. The errors are then counted (the top 30 most common test errors are printed), reduced per error and per model, and written out as JSON reports plus GitHub-style markdown tables (reduced_by_error.txt and reduced_by_model.txt).
import argparse import json import math import os import time import traceback import zipfile from collections import Counter import requests def get_job_links(workflow_run_id, token=None): headers = None if token is not None: headers = {"Accept": "application/vnd.github+json", "Authorization": f"Bearer {token}"} url = f"https://api.github.com/repos/huggingface/transformers/actions/runs/{workflow_run_id}/jobs?per_page=100" result = requests.get(url, headers=headers).json() job_links = {} try: job_links.update({job["name"]: job["html_url"] for job in result["jobs"]}) pages_to_iterate_over = math.ceil((result["total_count"] - 100) / 100) for i in range(pages_to_iterate_over): result = requests.get(url + f"&page={i + 2}", headers=headers).json() job_links.update({job["name"]: job["html_url"] for job in result["jobs"]}) return job_links except Exception: print(f"Unknown error, could not fetch links:\n{traceback.format_exc()}") return {} def get_artifacts_links(worflow_run_id, token=None): headers = None if token is not None: headers = {"Accept": "application/vnd.github+json", "Authorization": f"Bearer {token}"} url = f"https://api.github.com/repos/huggingface/transformers/actions/runs/{worflow_run_id}/artifacts?per_page=100" result = requests.get(url, headers=headers).json() artifacts = {} try: artifacts.update({artifact["name"]: artifact["archive_download_url"] for artifact in result["artifacts"]}) pages_to_iterate_over = math.ceil((result["total_count"] - 100) / 100) for i in range(pages_to_iterate_over): result = requests.get(url + f"&page={i + 2}", headers=headers).json() artifacts.update({artifact["name"]: artifact["archive_download_url"] for artifact in result["artifacts"]}) return artifacts except Exception: print(f"Unknown error, could not fetch links:\n{traceback.format_exc()}") return {} def download_artifact(artifact_name, artifact_url, output_dir, token): headers = None if token is not None: headers = {"Accept": "application/vnd.github+json", "Authorization": f"Bearer {token}"} result = requests.get(artifact_url, headers=headers, allow_redirects=False) download_url = result.headers["Location"] response = requests.get(download_url, allow_redirects=True) file_path = os.path.join(output_dir, f"{artifact_name}.zip") with open(file_path, "wb") as fp: fp.write(response.content) def get_errors_from_single_artifact(artifact_zip_path, job_links=None): errors = [] failed_tests = [] job_name = None with zipfile.ZipFile(artifact_zip_path) as z: for filename in z.namelist(): if not os.path.isdir(filename): if filename in ["failures_line.txt", "summary_short.txt", "job_name.txt"]: with z.open(filename) as f: for line in f: line = line.decode("UTF-8").strip() if filename == "failures_line.txt": try: error_line = line[: line.index(": ")] error = line[line.index(": ") + len(": ") :] errors.append([error_line, error]) except Exception: pass elif filename == "summary_short.txt" and line.startswith("FAILED "): test = line[len("FAILED ") :] failed_tests.append(test) elif filename == "job_name.txt": job_name = line if len(errors) != len(failed_tests): raise ValueError( f"`errors` and `failed_tests` should have the same number of elements. Got {len(errors)} for `errors` " f"and {len(failed_tests)} for `failed_tests` instead. The test reports in {artifact_zip_path} have some" " problem." 
) job_link = None if job_name and job_links: job_link = job_links.get(job_name, None) result = [x + [y] + [job_link] for x, y in zip(errors, failed_tests)] return result def get_all_errors(artifact_dir, job_links=None): errors = [] paths = [os.path.join(artifact_dir, p) for p in os.listdir(artifact_dir) if p.endswith(".zip")] for p in paths: errors.extend(get_errors_from_single_artifact(p, job_links=job_links)) return errors def reduce_by_error(logs, error_filter=None): counter = Counter() counter.update([x[1] for x in logs]) counts = counter.most_common() r = {} for error, count in counts: if error_filter is None or error not in error_filter: r[error] = {"count": count, "failed_tests": [(x[2], x[0]) for x in logs if x[1] == error]} r = dict(sorted(r.items(), key=lambda item: item[1]["count"], reverse=True)) return r def get_model(test): test = test.split("::")[0] if test.startswith("tests/models/"): test = test.split("/")[2] else: test = None return test def reduce_by_model(logs, error_filter=None): logs = [(x[0], x[1], get_model(x[2])) for x in logs] logs = [x for x in logs if x[2] is not None] tests = {x[2] for x in logs} r = {} for test in tests: counter = Counter() counter.update([x[1] for x in logs if x[2] == test]) counts = counter.most_common() error_counts = {error: count for error, count in counts if (error_filter is None or error not in error_filter)} n_errors = sum(error_counts.values()) if n_errors > 0: r[test] = {"count": n_errors, "errors": error_counts} r = dict(sorted(r.items(), key=lambda item: item[1]["count"], reverse=True)) return r def make_github_table(reduced_by_error): header = "| no. | error | status |" sep = "|-:|:-|:-|" lines = [header, sep] for error in reduced_by_error: count = reduced_by_error[error]["count"] line = f"| {count} | {error[:100]} | |" lines.append(line) return "\n".join(lines) def make_github_table_per_model(reduced_by_model): header = "| model | no. 
of errors | major error | count |" sep = "|-:|-:|-:|-:|" lines = [header, sep] for model in reduced_by_model: count = reduced_by_model[model]["count"] error, _count = list(reduced_by_model[model]["errors"].items())[0] line = f"| {model} | {count} | {error[:60]} | {_count} |" lines.append(line) return "\n".join(lines) if __name__ == "__main__": parser = argparse.ArgumentParser() parser.add_argument("--workflow_run_id", type=str, required=True, help="A GitHub Actions workflow run id.") parser.add_argument( "--output_dir", type=str, required=True, help="Where to store the downloaded artifacts and other result files.", ) parser.add_argument("--token", default=None, type=str, help="A token that has actions:read permission.") args = parser.parse_args() os.makedirs(args.output_dir, exist_ok=True) _job_links = get_job_links(args.workflow_run_id, token=args.token) job_links = {} if _job_links: for k, v in _job_links.items(): if " / " in k: index = k.find(" / ") k = k[index + len(" / ") :] job_links[k] = v with open(os.path.join(args.output_dir, "job_links.json"), "w", encoding="UTF-8") as fp: json.dump(job_links, fp, ensure_ascii=False, indent=4) artifacts = get_artifacts_links(args.workflow_run_id, token=args.token) with open(os.path.join(args.output_dir, "artifacts.json"), "w", encoding="UTF-8") as fp: json.dump(artifacts, fp, ensure_ascii=False, indent=4) for idx, (name, url) in enumerate(artifacts.items()): download_artifact(name, url, args.output_dir, args.token) time.sleep(1) errors = get_all_errors(args.output_dir, job_links=job_links) counter = Counter() counter.update([e[1] for e in errors]) most_common = counter.most_common(30) for item in most_common: print(item) with open(os.path.join(args.output_dir, "errors.json"), "w", encoding="UTF-8") as fp: json.dump(errors, fp, ensure_ascii=False, indent=4) reduced_by_error = reduce_by_error(errors) reduced_by_model = reduce_by_model(errors) s1 = make_github_table(reduced_by_error) s2 = make_github_table_per_model(reduced_by_model) with open(os.path.join(args.output_dir, "reduced_by_error.txt"), "w", encoding="UTF-8") as fp: fp.write(s1) with open(os.path.join(args.output_dir, "reduced_by_model.txt"), "w", encoding="UTF-8") as fp: fp.write(s2)
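A minimal sketch (not part of the script) of how reduce_by_error and make_github_table fit together. The log entries below are hand-made for illustration and follow the [error_line, error, failed_test, job_link] layout produced by get_all_errors; in real use the logs come from the downloaded CI artifacts.

from get_ci_error_statistics import make_github_table, reduce_by_error

# Hand-made log entries mimicking the output of `get_all_errors` (values are illustrative only).
logs = [
    ["test_modeling_bert.py:42", "AssertionError: logits mismatch",
     "tests/models/bert/test_modeling_bert.py::BertModelTest::test_forward", None],
    ["test_modeling_gpt2.py:17", "AssertionError: logits mismatch",
     "tests/models/gpt2/test_modeling_gpt2.py::GPT2ModelTest::test_forward", None],
    ["test_modeling_bert.py:84", "OSError: checkpoint not found",
     "tests/models/bert/test_modeling_bert.py::BertModelTest::test_from_pretrained", None],
]

reduced = reduce_by_error(logs)
# Prints a markdown table with one row per distinct error, most frequent first.
print(make_github_table(reduced))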
Extract time info for the jobs in a GitHub Actions workflow run. extract_time_from_single_job reads the started_at / completed_at timestamps of a single job and computes its duration in minutes; get_job_time collects this information for all jobs of a workflow run, paginating through the API results. The main block sorts the jobs by duration, longest first, and prints one line per job. Example: python get_github_job_time.py --workflow_run_id 2945609517 (--workflow_run_id, a GitHub Actions workflow run id, is the only required parameter).
import argparse
import math
import traceback

import dateutil.parser as date_parser
import requests


def extract_time_from_single_job(job):
    job_info = {}

    start = job["started_at"]
    end = job["completed_at"]

    start_datetime = date_parser.parse(start)
    end_datetime = date_parser.parse(end)

    duration_in_min = round((end_datetime - start_datetime).total_seconds() / 60.0)

    job_info["started_at"] = start
    job_info["completed_at"] = end
    job_info["duration"] = duration_in_min

    return job_info


def get_job_time(workflow_run_id, token=None):
    headers = None
    if token is not None:
        headers = {"Accept": "application/vnd.github+json", "Authorization": f"Bearer {token}"}

    url = f"https://api.github.com/repos/huggingface/transformers/actions/runs/{workflow_run_id}/jobs?per_page=100"
    result = requests.get(url, headers=headers).json()
    job_time = {}

    try:
        job_time.update({job["name"]: extract_time_from_single_job(job) for job in result["jobs"]})
        pages_to_iterate_over = math.ceil((result["total_count"] - 100) / 100)

        for i in range(pages_to_iterate_over):
            result = requests.get(url + f"&page={i + 2}", headers=headers).json()
            job_time.update({job["name"]: extract_time_from_single_job(job) for job in result["jobs"]})

        return job_time
    except Exception:
        print(f"Unknown error, could not fetch links:\n{traceback.format_exc()}")

    return {}


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--workflow_run_id", type=str, required=True, help="A GitHub Actions workflow run id.")
    args = parser.parse_args()

    job_time = get_job_time(args.workflow_run_id)
    job_time = dict(sorted(job_time.items(), key=lambda item: item[1]["duration"], reverse=True))

    for k, v in job_time.items():
        print(f'{k}: {v["duration"]}')
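A quick, hedged illustration of what extract_time_from_single_job computes; the job payload below is hypothetical but mirrors the started_at / completed_at fields the function reads.

# Hypothetical job payload with the two timestamp fields used above.
job = {"started_at": "2023-05-01T10:00:00Z", "completed_at": "2023-05-01T10:41:00Z"}
print(extract_time_from_single_job(job))
# {'started_at': '2023-05-01T10:00:00Z', 'completed_at': '2023-05-01T10:41:00Z', 'duration': 41}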
Copyright 2020 The HuggingFace Inc. team; licensed under the Apache License, Version 2.0. This script reports modified .py files under the desired list of top-level sub-dirs passed as a list of arguments, e.g. python utils/get_modified_files.py utils src tests examples. It uses git to find the forking point and which files were modified, i.e. files not under git won't be considered. Since the output of this script is fed into Makefile commands, it doesn't print a newline after the results.
import re
import subprocess
import sys


fork_point_sha = subprocess.check_output("git merge-base main HEAD".split()).decode("utf-8")
modified_files = (
    subprocess.check_output(f"git diff --diff-filter=d --name-only {fork_point_sha}".split()).decode("utf-8").split()
)

joined_dirs = "|".join(sys.argv[1:])
regex = re.compile(rf"^({joined_dirs}).*?\.py$")

relevant_modified_files = [x for x in modified_files if regex.match(x)]
print(" ".join(relevant_modified_files), end="")
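A small self-contained illustration (with assumed inputs, independent of git) of how the directory filter above behaves: only .py files under the requested top-level sub-dirs survive.

import re

joined_dirs = "|".join(["utils", "src", "tests", "examples"])
regex = re.compile(rf"^({joined_dirs}).*?\.py$")

# Made-up candidate paths; only the .py files under the requested dirs match.
candidates = ["src/transformers/modeling_utils.py", "docs/source/index.md", "tests/test_configuration_common.py"]
print([x for x in candidates if regex.match(x)])
# ['src/transformers/modeling_utils.py', 'tests/test_configuration_common.py']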
Helpers to fetch results of the scheduled daily CI. get_daily_ci_runs returns the workflow runs of the daily CI: it only selects runs of a given workflow id (the id of a workflow, not of a workflow run) that were triggered by the schedule event on the main branch, excluding pull requests and limited to num_runs results. get_last_daily_ci_runs returns the id of the last completed workflow run of the daily CI. get_last_daily_ci_artifacts downloads the requested artifacts of that run into output_dir, and get_last_daily_ci_reports additionally opens each downloaded zip and returns the decoded content of every file it contains.
import os
import zipfile

import requests
from get_ci_error_statistics import download_artifact, get_artifacts_links


def get_daily_ci_runs(token, num_runs=7):
    headers = None
    if token is not None:
        headers = {"Accept": "application/vnd.github+json", "Authorization": f"Bearer {token}"}

    workflow_id = "636036"

    url = f"https://api.github.com/repos/huggingface/transformers/actions/workflows/{workflow_id}/runs"
    url += f"?branch=main&event=schedule&exclude_pull_requests=true&per_page={num_runs}"

    result = requests.get(url, headers=headers).json()

    return result["workflow_runs"]


def get_last_daily_ci_runs(token):
    workflow_runs = get_daily_ci_runs(token)
    workflow_run_id = None
    for workflow_run in workflow_runs:
        if workflow_run["status"] == "completed":
            workflow_run_id = workflow_run["id"]
            break

    return workflow_run_id


def get_last_daily_ci_artifacts(artifact_names, output_dir, token):
    workflow_run_id = get_last_daily_ci_runs(token)
    if workflow_run_id is not None:
        artifacts_links = get_artifacts_links(worflow_run_id=workflow_run_id, token=token)
        for artifact_name in artifact_names:
            if artifact_name in artifacts_links:
                artifact_url = artifacts_links[artifact_name]
                download_artifact(
                    artifact_name=artifact_name, artifact_url=artifact_url, output_dir=output_dir, token=token
                )


def get_last_daily_ci_reports(artifact_names, output_dir, token):
    get_last_daily_ci_artifacts(artifact_names, output_dir, token)

    results = {}
    for artifact_name in artifact_names:
        artifact_zip_path = os.path.join(output_dir, f"{artifact_name}.zip")
        if os.path.isfile(artifact_zip_path):
            results[artifact_name] = {}
            with zipfile.ZipFile(artifact_zip_path) as z:
                for filename in z.namelist():
                    if not os.path.isdir(filename):
                        with z.open(filename) as f:
                            results[artifact_name][filename] = f.read().decode("UTF-8")

    return results
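A hedged usage sketch: the artifact name and output directory below are placeholders (not a guaranteed list of daily-CI artifacts), and a GitHub token is assumed to be available in the environment.

import os

token = os.environ.get("GITHUB_TOKEN")  # assumed to be set in the environment
reports = get_last_daily_ci_reports(
    artifact_names=["run_all_tests_gpu_test_reports"],  # placeholder artifact name
    output_dir="previous_daily_ci",
    token=token,
)
for artifact_name, files in reports.items():
    print(artifact_name, sorted(files))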
Copyright 2023 The HuggingFace Inc. team; licensed under the Apache License, Version 2.0. Helpers to inspect a model test file; the argument test_file always refers to a model test file and should be a string of the form tests/models/*/test_modeling_*.py. Appending the repo root to sys.path is required to make the imports work when the Python process is running from the root of the repo. get_module_path returns the module path of a model test file and get_test_module imports it. get_tester_classes returns all classes whose names end with ModelTester. get_test_classes returns all test classes with a non-empty all_model_classes attribute: these are usually the model test classes containing the non-slow tests to run, subclasses of one of ModelTesterMixin, TFModelTesterMixin or FlaxModelTesterMixin as well as of unittest.TestCase (exceptions include RagTestMixin and its subclasses); (TF/Flax)ModelTesterMixin also has this attribute, so checking that all_model_classes is non-empty excludes them and other special classes. get_model_classes returns all model classes that appear in all_model_classes attributes. get_model_tester_from_test_class returns the model tester class of a test class ((TF/Flax)ModelTesterMixin has model_tester default to None, so that case is skipped). get_test_classes_for_model and get_tester_classes_for_model restrict the above to a given model class, and the get_*_mapping helpers build mappings from test classes to model tester classes, from model classes to test classes, and from model classes to model tester classes; get_test_to_tester_mapping uses get_test_classes, which may return classes that are not subclasses of unittest.TestCase. All lists are sorted by class names. to_json makes the information succinct and easy to read: instead of the full class representation like <class 'transformers.models.bert.modeling_bert.BertForMaskedLM'>, only the class name (BertForMaskedLM) is displayed for readability.
import importlib
import os
import sys


sys.path.append(".")


def get_module_path(test_file):
    components = test_file.split(os.path.sep)
    if components[0:2] != ["tests", "models"]:
        raise ValueError(
            "`test_file` should start with `tests/models/` (with `/` being the OS specific path separator). Got "
            f"{test_file} instead."
        )
    test_fn = components[-1]
    if not test_fn.endswith("py"):
        raise ValueError(f"`test_file` should be a python file. Got {test_fn} instead.")
    if not test_fn.startswith("test_modeling_"):
        raise ValueError(
            f"`test_file` should point to a file name of the form `test_modeling_*.py`. Got {test_fn} instead."
        )

    components = components[:-1] + [test_fn.replace(".py", "")]
    test_module_path = ".".join(components)

    return test_module_path


def get_test_module(test_file):
    test_module_path = get_module_path(test_file)
    test_module = importlib.import_module(test_module_path)

    return test_module


def get_tester_classes(test_file):
    tester_classes = []
    test_module = get_test_module(test_file)
    for attr in dir(test_module):
        if attr.endswith("ModelTester"):
            tester_classes.append(getattr(test_module, attr))

    return sorted(tester_classes, key=lambda x: x.__name__)


def get_test_classes(test_file):
    test_classes = []
    test_module = get_test_module(test_file)
    for attr in dir(test_module):
        attr_value = getattr(test_module, attr)
        model_classes = getattr(attr_value, "all_model_classes", [])
        if len(model_classes) > 0:
            test_classes.append(attr_value)

    return sorted(test_classes, key=lambda x: x.__name__)


def get_model_classes(test_file):
    test_classes = get_test_classes(test_file)
    model_classes = set()
    for test_class in test_classes:
        model_classes.update(test_class.all_model_classes)

    return sorted(model_classes, key=lambda x: x.__name__)


def get_model_tester_from_test_class(test_class):
    test = test_class()
    if hasattr(test, "setUp"):
        test.setUp()

    model_tester = None
    if hasattr(test, "model_tester"):
        if test.model_tester is not None:
            model_tester = test.model_tester.__class__

    return model_tester


def get_test_classes_for_model(test_file, model_class):
    test_classes = get_test_classes(test_file)

    target_test_classes = []
    for test_class in test_classes:
        if model_class in test_class.all_model_classes:
            target_test_classes.append(test_class)

    return sorted(target_test_classes, key=lambda x: x.__name__)


def get_tester_classes_for_model(test_file, model_class):
    test_classes = get_test_classes_for_model(test_file, model_class)

    tester_classes = []
    for test_class in test_classes:
        tester_class = get_model_tester_from_test_class(test_class)
        if tester_class is not None:
            tester_classes.append(tester_class)

    return sorted(tester_classes, key=lambda x: x.__name__)


def get_test_to_tester_mapping(test_file):
    test_classes = get_test_classes(test_file)
    test_tester_mapping = {test_class: get_model_tester_from_test_class(test_class) for test_class in test_classes}

    return test_tester_mapping


def get_model_to_test_mapping(test_file):
    model_classes = get_model_classes(test_file)
    model_test_mapping = {
        model_class: get_test_classes_for_model(test_file, model_class) for model_class in model_classes
    }

    return model_test_mapping


def get_model_to_tester_mapping(test_file):
    model_classes = get_model_classes(test_file)
    model_to_tester_mapping = {
        model_class: get_tester_classes_for_model(test_file, model_class) for model_class in model_classes
    }

    return model_to_tester_mapping


def to_json(o):
    if isinstance(o, str):
        return o
    elif isinstance(o, type):
        return o.__name__
    elif isinstance(o, (list, tuple)):
        return [to_json(x) for x in o]
    elif isinstance(o, dict):
        return {to_json(k): to_json(v) for k, v in o.items()}
    else:
        return o
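A short usage sketch, assuming the code is run from the root of the repo (as required for the imports above); the BERT test file is just an example path.

test_file = "tests/models/bert/test_modeling_bert.py"  # example model test file

model_to_tester = get_model_to_tester_mapping(test_file)
# Rendered with class names only, e.g. {"BertModel": ["BertModelTester"], ...}
print(to_json(model_to_tester))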
Copyright 2022 The HuggingFace team; licensed under the Apache License, Version 2.0. Slack notification service for the doc tests. handle_test_results parses the pytest output: when the output is short enough it is surrounded by '=' signs, and when it is too long those signs are not present; it returns the failures and successes of the tests and the time spent, which can be formatted as xx:xx:xx, as xx:xx, or as x.xx if the time spent was less than a minute. Text must be less than 3001 characters in the Slack SDK, so some room is kept for adding '[Truncated]' when necessary (failure_text is capped at a MAX_ERROR_TEXT of 3000 characters). doc_test_results is a dict that gathers, for each doc test category, 'failed' (the list of failed tests) and 'failures' (a dict in the format {test: error_message}), plus the link to the GitHub Action job.
import collections import json import math import os import re import time from fnmatch import fnmatch from typing import Dict, List import requests from slack_sdk import WebClient client = WebClient(token=os.environ["CI_SLACK_BOT_TOKEN"]) def handle_test_results(test_results): expressions = test_results.split(" ") failed = 0 success = 0 time_spent = expressions[-2] if "=" in expressions[-1] else expressions[-1] for i, expression in enumerate(expressions): if "failed" in expression: failed += int(expressions[i - 1]) if "passed" in expression: success += int(expressions[i - 1]) return failed, success, time_spent def extract_first_line_failure(failures_short_lines): failures = {} file = None in_error = False for line in failures_short_lines.split("\n"): if re.search(r"_ \[doctest\]", line): in_error = True file = line.split(" ")[2] elif in_error and not line.split(" ")[0].isdigit(): failures[file] = line in_error = False return failures class Message: def __init__(self, title: str, doc_test_results: Dict): self.title = title self._time_spent = doc_test_results["time_spent"].split(",")[0] self.n_success = doc_test_results["success"] self.n_failures = doc_test_results["failures"] self.n_tests = self.n_success + self.n_failures self.doc_test_results = doc_test_results @property def time(self) -> str: time_spent = [self._time_spent] total_secs = 0 for time in time_spent: time_parts = time.split(":") if len(time_parts) == 1: time_parts = [0, 0, time_parts[0]] hours, minutes, seconds = int(time_parts[0]), int(time_parts[1]), float(time_parts[2]) total_secs += hours * 3600 + minutes * 60 + seconds hours, minutes, seconds = total_secs // 3600, (total_secs % 3600) // 60, total_secs % 60 return f"{int(hours)}h{int(minutes)}m{int(seconds)}s" @property def header(self) -> Dict: return {"type": "header", "text": {"type": "plain_text", "text": self.title}} @property def no_failures(self) -> Dict: return { "type": "section", "text": { "type": "plain_text", "text": f"🌞 There were no failures: all {self.n_tests} tests passed. The suite ran in {self.time}.", "emoji": True, }, "accessory": { "type": "button", "text": {"type": "plain_text", "text": "Check Action results", "emoji": True}, "url": f"https://github.com/huggingface/transformers/actions/runs/{os.environ['GITHUB_RUN_ID']}", }, } @property def failures(self) -> Dict: return { "type": "section", "text": { "type": "plain_text", "text": ( f"There were {self.n_failures} failures, out of {self.n_tests} tests.\nThe suite ran in" f" {self.time}." 
), "emoji": True, }, "accessory": { "type": "button", "text": {"type": "plain_text", "text": "Check Action results", "emoji": True}, "url": f"https://github.com/huggingface/transformers/actions/runs/{os.environ['GITHUB_RUN_ID']}", }, } @property def category_failures(self) -> List[Dict]: failure_blocks = [] MAX_ERROR_TEXT = 3000 - len("The following examples had failures:\n\n\n\n") - len("[Truncated]\n") line_length = 40 category_failures = {k: v["failed"] for k, v in doc_test_results.items() if isinstance(v, dict)} def single_category_failures(category, failures): text = "" if len(failures) == 0: return "" text += f"*{category} failures*:".ljust(line_length // 2).rjust(line_length // 2) + "\n" for idx, failure in enumerate(failures): new_text = text + f"`{failure}`\n" if len(new_text) > MAX_ERROR_TEXT: text = text + "[Truncated]\n" break text = new_text return text for category, failures in category_failures.items(): report = single_category_failures(category, failures) if len(report) == 0: continue block = { "type": "section", "text": { "type": "mrkdwn", "text": f"The following examples had failures:\n\n\n{report}\n", }, } failure_blocks.append(block) return failure_blocks @property def payload(self) -> str: blocks = [self.header] if self.n_failures > 0: blocks.append(self.failures) if self.n_failures > 0: blocks.extend(self.category_failures) if self.n_failures == 0: blocks.append(self.no_failures) return json.dumps(blocks) @staticmethod def error_out(): payload = [ { "type": "section", "text": { "type": "plain_text", "text": "There was an issue running the tests.", }, "accessory": { "type": "button", "text": {"type": "plain_text", "text": "Check Action results", "emoji": True}, "url": f"https://github.com/huggingface/transformers/actions/runs/{os.environ['GITHUB_RUN_ID']}", }, } ] print("Sending the following payload") print(json.dumps({"blocks": json.loads(payload)})) client.chat_postMessage( channel=os.environ["CI_SLACK_CHANNEL_ID_DAILY"], text="There was an issue running the tests.", blocks=payload, ) def post(self): print("Sending the following payload") print(json.dumps({"blocks": json.loads(self.payload)})) text = f"{self.n_failures} failures out of {self.n_tests} tests," if self.n_failures else "All tests passed." 
self.thread_ts = client.chat_postMessage( channel=os.environ["CI_SLACK_CHANNEL_ID_DAILY"], blocks=self.payload, text=text, ) def get_reply_blocks(self, job_name, job_link, failures, text): MAX_ERROR_TEXT = 3000 - len("[Truncated]") failure_text = "" for key, value in failures.items(): new_text = failure_text + f"*{key}*\n_{value}_\n\n" if len(new_text) > MAX_ERROR_TEXT: failure_text = failure_text + "[Truncated]" break failure_text = new_text title = job_name content = {"type": "section", "text": {"type": "mrkdwn", "text": text}} if job_link is not None: content["accessory"] = { "type": "button", "text": {"type": "plain_text", "text": "GitHub Action job", "emoji": True}, "url": job_link, } return [ {"type": "header", "text": {"type": "plain_text", "text": title.upper(), "emoji": True}}, content, {"type": "section", "text": {"type": "mrkdwn", "text": failure_text}}, ] def post_reply(self): if self.thread_ts is None: raise ValueError("Can only post reply if a post has been made.") job_link = self.doc_test_results.pop("job_link") self.doc_test_results.pop("failures") self.doc_test_results.pop("success") self.doc_test_results.pop("time_spent") sorted_dict = sorted(self.doc_test_results.items(), key=lambda t: t[0]) for job, job_result in sorted_dict: if len(job_result["failures"]): text = f"*Num failures* :{len(job_result['failed'])} \n" failures = job_result["failures"] blocks = self.get_reply_blocks(job, job_link, failures, text=text) print("Sending the following reply") print(json.dumps({"blocks": blocks})) client.chat_postMessage( channel=os.environ["CI_SLACK_CHANNEL_ID_DAILY"], text=f"Results for {job}", blocks=blocks, thread_ts=self.thread_ts["ts"], ) time.sleep(1) def get_job_links(): run_id = os.environ["GITHUB_RUN_ID"] url = f"https://api.github.com/repos/huggingface/transformers/actions/runs/{run_id}/jobs?per_page=100" result = requests.get(url).json() jobs = {} try: jobs.update({job["name"]: job["html_url"] for job in result["jobs"]}) pages_to_iterate_over = math.ceil((result["total_count"] - 100) / 100) for i in range(pages_to_iterate_over): result = requests.get(url + f"&page={i + 2}").json() jobs.update({job["name"]: job["html_url"] for job in result["jobs"]}) return jobs except Exception as e: print("Unknown error, could not fetch links.", e) return {} def retrieve_artifact(name: str): _artifact = {} if os.path.exists(name): files = os.listdir(name) for file in files: try: with open(os.path.join(name, file), encoding="utf-8") as f: _artifact[file.split(".")[0]] = f.read() except UnicodeDecodeError as e: raise ValueError(f"Could not open {os.path.join(name, file)}.") from e return _artifact def retrieve_available_artifacts(): class Artifact: def __init__(self, name: str): self.name = name self.paths = [] def __str__(self): return self.name def add_path(self, path: str): self.paths.append({"name": self.name, "path": path}) _available_artifacts: Dict[str, Artifact] = {} directories = filter(os.path.isdir, os.listdir()) for directory in directories: artifact_name = directory if artifact_name not in _available_artifacts: _available_artifacts[artifact_name] = Artifact(artifact_name) _available_artifacts[artifact_name].add_path(directory) return _available_artifacts if __name__ == "__main__": github_actions_job_links = get_job_links() available_artifacts = retrieve_available_artifacts() docs = collections.OrderedDict( [ ("*.py", "API Examples"), ("*.md", "MD Examples"), ] ) doc_test_results = { v: { "failed": [], "failures": {}, } for v in docs.values() } doc_test_results["job_link"] = 
github_actions_job_links.get("run_doctests") artifact_path = available_artifacts["doc_tests_gpu_test_reports"].paths[0] artifact = retrieve_artifact(artifact_path["name"]) if "stats" in artifact: failed, success, time_spent = handle_test_results(artifact["stats"]) doc_test_results["failures"] = failed doc_test_results["success"] = success doc_test_results["time_spent"] = time_spent[1:-1] + ", " all_failures = extract_first_line_failure(artifact["failures_short"]) for line in artifact["summary_short"].split("\n"): if re.search("FAILED", line): line = line.replace("FAILED ", "") line = line.split()[0].replace("\n", "") if "::" in line: file_path, test = line.split("::") else: file_path, test = line, line for file_regex in docs.keys(): if fnmatch(file_path, file_regex): category = docs[file_regex] doc_test_results[category]["failed"].append(test) failure = all_failures[test] if test in all_failures else "N/A" doc_test_results[category]["failures"][test] = failure break message = Message("🤗 Results of the doc tests.", doc_test_results) message.post() message.post_reply()
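A small, hedged illustration of the stats-parsing helpers defined above; the pytest summary and failure lines are made up. Note that merely importing this module requires CI_SLACK_BOT_TOKEN to be set, because the Slack client is created at import time.

# Made-up pytest summary line of the shape handled by `handle_test_results`.
stats = "4 failed, 130 passed in 12:04:33"
failed, success, time_spent = handle_test_results(stats)
print(failed, success, time_spent)  # 4 130 12:04:33

# Made-up failures_short content: a doctest header line followed by the first error line.
failures_short = (
    "_ [doctest] transformers.models.bert.modeling_bert.BertModel.forward\n"
    "ValueError: unexpected output"
)
print(extract_first_line_failure(failures_short))
# {'transformers.models.bert.modeling_bert.BertModel.forward': 'ValueError: unexpected output'}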
torchaudio 0.10 has no CUDA-enabled binary distributions.
import argparse import os past_versions_testing = { "pytorch": { "1.13": { "torch": "1.13.1", "torchvision": "0.14.1", "torchaudio": "0.13.1", "python": 3.9, "cuda": "cu116", "install": ( "python3 -m pip install --no-cache-dir -U torch==1.13.1 torchvision==0.14.1 torchaudio==0.13.1" " --extra-index-url https://download.pytorch.org/whl/cu116" ), "base_image": "nvidia/cuda:11.6.2-cudnn8-devel-ubuntu20.04", }, "1.12": { "torch": "1.12.1", "torchvision": "0.13.1", "torchaudio": "0.12.1", "python": 3.9, "cuda": "cu113", "install": ( "python3 -m pip install --no-cache-dir -U torch==1.12.1 torchvision==0.13.1 torchaudio==0.12.1" " --extra-index-url https://download.pytorch.org/whl/cu113" ), "base_image": "nvidia/cuda:11.2.2-cudnn8-devel-ubuntu20.04", }, "1.11": { "torch": "1.11.0", "torchvision": "0.12.0", "torchaudio": "0.11.0", "python": 3.9, "cuda": "cu113", "install": ( "python3 -m pip install --no-cache-dir -U torch==1.11.0 torchvision==0.12.0 torchaudio==0.11.0" " --extra-index-url https://download.pytorch.org/whl/cu113" ), "base_image": "nvidia/cuda:11.2.2-cudnn8-devel-ubuntu20.04", }, "1.10": { "torch": "1.10.2", "torchvision": "0.11.3", "torchaudio": "0.10.2", "python": 3.9, "cuda": "cu113", "install": ( "python3 -m pip install --no-cache-dir -U torch==1.10.2 torchvision==0.11.3 torchaudio==0.10.2" " --extra-index-url https://download.pytorch.org/whl/cu113" ), "base_image": "nvidia/cuda:11.2.2-cudnn8-devel-ubuntu20.04", }, "1.9": { "torch": "1.9.1", "torchvision": "0.10.1", "torchaudio": "0.9.1", "python": 3.9, "cuda": "cu111", "install": ( "python3 -m pip install --no-cache-dir -U torch==1.9.1 torchvision==0.10.1 torchaudio==0.9.1" " --extra-index-url https://download.pytorch.org/whl/cu111" ), "base_image": "nvidia/cuda:11.2.2-cudnn8-devel-ubuntu20.04", }, }, "tensorflow": { "2.11": { "tensorflow": "2.11.1", "install": "python3 -m pip install --no-cache-dir -U tensorflow==2.11.1", "base_image": "nvidia/cuda:11.2.2-cudnn8-devel-ubuntu20.04", }, "2.10": { "tensorflow": "2.10.1", "install": "python3 -m pip install --no-cache-dir -U tensorflow==2.10.1", "base_image": "nvidia/cuda:11.2.2-cudnn8-devel-ubuntu20.04", }, "2.9": { "tensorflow": "2.9.3", "install": "python3 -m pip install --no-cache-dir -U tensorflow==2.9.3", "base_image": "nvidia/cuda:11.2.2-cudnn8-devel-ubuntu20.04", }, "2.8": { "tensorflow": "2.8.2", "install": "python3 -m pip install --no-cache-dir -U tensorflow==2.8.2", "base_image": "nvidia/cuda:11.2.2-cudnn8-devel-ubuntu20.04", }, "2.7": { "tensorflow": "2.7.3", "install": "python3 -m pip install --no-cache-dir -U tensorflow==2.7.3", "base_image": "nvidia/cuda:11.2.2-cudnn8-devel-ubuntu20.04", }, "2.6": { "tensorflow": "2.6.5", "install": "python3 -m pip install --no-cache-dir -U tensorflow==2.6.5", "base_image": "nvidia/cuda:11.2.2-cudnn8-devel-ubuntu20.04", }, "2.5": { "tensorflow": "2.5.3", "install": "python3 -m pip install --no-cache-dir -U tensorflow==2.5.3", "base_image": "nvidia/cuda:11.2.2-cudnn8-devel-ubuntu20.04", }, }, } if __name__ == "__main__": parser = argparse.ArgumentParser("Choose the framework and version to install") parser.add_argument( "--framework", help="The framework to install. 
Should be `torch` or `tensorflow`", type=str, required=True ) parser.add_argument("--version", help="The version of the framework to install.", type=str, required=True) args = parser.parse_args() info = past_versions_testing[args.framework][args.version] os.system(f'echo "export INSTALL_CMD=\'{info["install"]}\'" >> ~/.profile') print(f'echo "export INSTALL_CMD=\'{info["install"]}\'" >> ~/.profile') cuda = "" if args.framework == "pytorch": cuda = info["cuda"] os.system(f"echo \"export CUDA='{cuda}'\" >> ~/.profile") print(f"echo \"export CUDA='{cuda}'\" >> ~/.profile")
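A brief illustration of how the table above is meant to be consumed: the framework/version pair is one of the real keys defined above, and the lookup mirrors what the main block does before exporting the install command.

info = past_versions_testing["pytorch"]["1.13"]
print(info["base_image"])  # nvidia/cuda:11.6.2-cudnn8-devel-ubuntu20.04
print(info["install"])     # the pip command pinning torch/torchvision/torchaudio to the 1.13 stack
print(info.get("cuda"))    # cu116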
#!/usr/bin/env python3 script. Copyright 2020 The HuggingFace Inc. team; licensed under the Apache License, Version 2.0. This script dumps information about the environment.
import os
import sys

import transformers


os.environ["TF_CPP_MIN_LOG_LEVEL"] = "3"

print("Python version:", sys.version)
print("transformers version:", transformers.__version__)

try:
    import torch

    print("Torch version:", torch.__version__)
    print("Cuda available:", torch.cuda.is_available())
    print("Cuda version:", torch.version.cuda)
    print("CuDNN version:", torch.backends.cudnn.version())
    print("Number of GPUs available:", torch.cuda.device_count())
    print("NCCL version:", torch.cuda.nccl.version())
except ImportError:
    print("Torch version:", None)

try:
    import deepspeed

    print("DeepSpeed version:", deepspeed.__version__)
except ImportError:
    print("DeepSpeed version:", None)

try:
    import tensorflow as tf

    print("TensorFlow version:", tf.__version__)
    print("TF GPUs available:", bool(tf.config.list_physical_devices("GPU")))
    print("Number of TF GPUs available:", len(tf.config.list_physical_devices("GPU")))
except ImportError:
    print("TensorFlow version:", None)
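The same guarded-import pattern can be extended to other optional dependencies; the snippet below is not part of the script and uses accelerate purely as an example of an additional package one might want to report.

try:
    import accelerate

    print("Accelerate version:", accelerate.__version__)
except ImportError:
    print("Accelerate version:", None)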
Copyright 2021 The HuggingFace team; licensed under the Apache License, Version 2.0. Utility that prepares the repository for releases or patches by updating all versions in the relevant places. It also performs some post-release cleanup by updating the links in the main README to the respective model doc pages (from main to stable). To prepare for a release, run from the root of the repo on the release branch: python release.py (or use make pre-release). To prepare for a patch release: python release.py --patch (or make pre-patch). To do the post-release cleanup, run from the root of the repo on the main branch: python release.py --post_release (or make post-release). All paths are defined with the intent that this script is run from the root of the repo. REPLACE_PATTERNS maps a type of file to the pattern to look for when searching where the version is defined, as well as the template to follow when replacing it with the new version; REPLACE_FILES maps a type of file to its path in transformers. update_version_in_file(fname, version, file_type) updates the version of transformers in one file (file_type should be a key in REPLACE_PATTERNS); update_version_in_examples updates it in all example files, removing the research_projects and legacy folders (non-actively maintained examples) from the walk; global_version_update(version, patch=False) updates the version in all needed files, leaving the examples untouched for patch releases. clean_main_ref_in_model_list replaces the links from the main doc to the stable doc in the model list of the README by finding the start of the list and updating its lines; if the introduction or the conclusion of the list change, the prompts may need to be updated. get_version reads the current version in the main __init__. pre_release_work(patch=False) does all the necessary pre-release steps: it figures out the next release version (the base version if we are in dev, a bumped micro for a patch, a bumped minor otherwise), asks nicely for confirmation, updates the version everywhere and, for non-patch releases, cleans up the model list in the main README. post_release_work does all the necessary post-release steps: it figures out the next dev version, checks with the user that it got it right, updates the version everywhere and cleans up the model list in the main README.
import argparse import os import re import packaging.version PATH_TO_EXAMPLES = "examples/" REPLACE_PATTERNS = { "examples": (re.compile(r'^check_min_version\("[^"]+"\)\s*$', re.MULTILINE), 'check_min_version("VERSION")\n'), "init": (re.compile(r'^__version__\s+=\s+"([^"]+)"\s*$', re.MULTILINE), '__version__ = "VERSION"\n'), "setup": (re.compile(r'^(\s*)version\s*=\s*"[^"]+",', re.MULTILINE), r'\1version="VERSION",'), } REPLACE_FILES = { "init": "src/transformers/__init__.py", "setup": "setup.py", } README_FILE = "README.md" def update_version_in_file(fname: str, version: str, file_type: str): with open(fname, "r", encoding="utf-8", newline="\n") as f: code = f.read() re_pattern, replace = REPLACE_PATTERNS[file_type] replace = replace.replace("VERSION", version) code = re_pattern.sub(replace, code) with open(fname, "w", encoding="utf-8", newline="\n") as f: f.write(code) def update_version_in_examples(version: str): for folder, directories, fnames in os.walk(PATH_TO_EXAMPLES): if "research_projects" in directories: directories.remove("research_projects") if "legacy" in directories: directories.remove("legacy") for fname in fnames: if fname.endswith(".py"): update_version_in_file(os.path.join(folder, fname), version, file_type="examples") def global_version_update(version: str, patch: bool = False): for pattern, fname in REPLACE_FILES.items(): update_version_in_file(fname, version, pattern) if not patch: update_version_in_examples(version) def clean_main_ref_in_model_list(): _start_prompt = "🤗 Transformers currently provides the following architectures" _end_prompt = "1. Want to contribute a new model?" with open(README_FILE, "r", encoding="utf-8", newline="\n") as f: lines = f.readlines() start_index = 0 while not lines[start_index].startswith(_start_prompt): start_index += 1 start_index += 1 index = start_index while not lines[index].startswith(_end_prompt): if lines[index].startswith("1."): lines[index] = lines[index].replace( "https://huggingface.co/docs/transformers/main/model_doc", "https://huggingface.co/docs/transformers/model_doc", ) index += 1 with open(README_FILE, "w", encoding="utf-8", newline="\n") as f: f.writelines(lines) def get_version() -> packaging.version.Version: with open(REPLACE_FILES["init"], "r") as f: code = f.read() default_version = REPLACE_PATTERNS["init"][0].search(code).groups()[0] return packaging.version.parse(default_version) def pre_release_work(patch: bool = False): default_version = get_version() if patch and default_version.is_devrelease: raise ValueError("Can't create a patch version from the dev branch, checkout a released version!") if default_version.is_devrelease: default_version = default_version.base_version elif patch: default_version = f"{default_version.major}.{default_version.minor}.{default_version.micro + 1}" else: default_version = f"{default_version.major}.{default_version.minor + 1}.0" version = input(f"Which version are you releasing? [{default_version}]") if len(version) == 0: version = default_version print(f"Updating version to {version}.") global_version_update(version, patch=patch) if not patch: print("Cleaning main README, don't forget to run `make fix-copies`.") clean_main_ref_in_model_list() def post_release_work(): current_version = get_version() dev_version = f"{current_version.major}.{current_version.minor + 1}.0.dev0" current_version = current_version.base_version version = input(f"Which version are we developing now? 
[{dev_version}]") if len(version) == 0: version = dev_version print(f"Updating version to {version}.") global_version_update(version) print("Cleaning main README, don't forget to run `make fix-copies`.") clean_main_ref_in_model_list() if __name__ == "__main__": parser = argparse.ArgumentParser() parser.add_argument("--post_release", action="store_true", help="Whether this is pre or post release.") parser.add_argument("--patch", action="store_true", help="Whether or not this is a patch release.") args = parser.parse_args() if not args.post_release: pre_release_work(patch=args.patch) elif args.patch: print("Nothing to do after a patch :-)") else: post_release_work()
Copyright 2022 The HuggingFace Inc. team; licensed under the Apache License, Version 2.0. Utility that sorts the names in the auto mappings defined in the auto modules in alphabetical order. Run from the root of the repo: python utils/sort_auto_mappings.py auto-fixes all the auto mappings (used in make style), and python utils/sort_auto_mappings.py --check_only only checks that they are properly sorted (used in make quality). Paths are set with the intent that this script is run from the root of the repo. _re_intro_mapping matches mapping introductions such as SUPER_MODEL_MAPPING_NAMES = OrderedDict or SUPER_MODEL_MAPPING = OrderedDict, and _re_identifier matches the identifiers inside a mapping. sort_auto_mapping(fname, overwrite=False) sorts all auto mappings in one file: at the start of each new mapping it gathers the entry blocks (which either fit in one line or not) and sorts them by their identifiers; it returns None when overwrite is True, otherwise True if the file has an auto mapping that is improperly sorted and False if the file is okay. sort_all_auto_mappings(overwrite=False) applies this to every file of the auto module.
import argparse
import os
import re
from typing import Optional


PATH_TO_AUTO_MODULE = "src/transformers/models/auto"

_re_intro_mapping = re.compile(r"[A-Z_]+_MAPPING(\s+|_[A-Z_]+\s+)=\s+OrderedDict")
_re_identifier = re.compile(r'\s*\(\s*"(\S[^"]+)"')


def sort_auto_mapping(fname: str, overwrite: bool = False) -> Optional[bool]:
    with open(fname, "r", encoding="utf-8") as f:
        content = f.read()

    lines = content.split("\n")
    new_lines = []
    line_idx = 0
    while line_idx < len(lines):
        if _re_intro_mapping.search(lines[line_idx]) is not None:
            indent = len(re.search(r"^(\s*)\S", lines[line_idx]).groups()[0]) + 8
            while not lines[line_idx].startswith(" " * indent + "("):
                new_lines.append(lines[line_idx])
                line_idx += 1

            blocks = []
            while lines[line_idx].strip() != "]":
                if lines[line_idx].strip() == "(":
                    start_idx = line_idx
                    while not lines[line_idx].startswith(" " * indent + ")"):
                        line_idx += 1
                    blocks.append("\n".join(lines[start_idx : line_idx + 1]))
                else:
                    blocks.append(lines[line_idx])
                line_idx += 1

            blocks = sorted(blocks, key=lambda x: _re_identifier.search(x).groups()[0])
            new_lines += blocks
        else:
            new_lines.append(lines[line_idx])
            line_idx += 1

    if overwrite:
        with open(fname, "w", encoding="utf-8") as f:
            f.write("\n".join(new_lines))
    else:
        return "\n".join(new_lines) != content


def sort_all_auto_mappings(overwrite: bool = False):
    fnames = [os.path.join(PATH_TO_AUTO_MODULE, f) for f in os.listdir(PATH_TO_AUTO_MODULE) if f.endswith(".py")]
    diffs = [sort_auto_mapping(fname, overwrite=overwrite) for fname in fnames]

    if not overwrite and any(diffs):
        failures = [f for f, d in zip(fnames, diffs) if d]
        raise ValueError(
            f"The following files have auto mappings that need sorting: {', '.join(failures)}. Run `make style` to fix"
            " this."
        )


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--check_only", action="store_true", help="Whether to only check or fix style.")
    args = parser.parse_args()

    sort_all_auto_mappings(not args.check_only)
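A short usage sketch, assumed to be run from the root of the repo; the file path is one of the real auto modules, used here only as an example.

needs_sorting = sort_auto_mapping("src/transformers/models/auto/modeling_auto.py", overwrite=False)
print("mappings need sorting" if needs_sorting else "mappings already sorted")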
Copyright 2021 The HuggingFace Inc. team; licensed under the Apache License, Version 2.0. Utility that updates the metadata of the Transformers library in the repository huggingface/transformers-metadata. Usage for an update, as used by the GitHub Action update_metadata: python utils/update_metadata.py --token <token> --commit_sha <commit_sha>. Usage to check that all pipelines are properly defined in the PIPELINE_TAGS_AND_AUTO_MODELS constant of this script, so that new pipelines are properly added as metadata (as used in make repo-consistency): python utils/update_metadata.py --check-only. All paths are set with the intent that the script is run from the root of the repo; direct_transformers_import makes sure the transformers module imported is the one in the repo. The _re_tf_models, _re_flax_models and _re_pt_models regexes match TF, Flax and PyTorch model names; the PyTorch regex would also match any TF or Flax model, so it is only used in an else branch after the two previous regexes. PIPELINE_TAGS_AND_AUTO_MODELS is filled with tuples (pipeline_tag, model_mapping, auto_model). camel_case_split splits a camel-cased name into words, as separated by capital letters, e.g. camel_case_split('CamelCasedClass') returns ['Camel', 'Cased', 'Class'] (regex thanks to https://stackoverflow.com/questions/29916065/how-to-do-camelcase-split-in-python). get_frameworks_table generates a dataframe containing the supported auto classes for each model type, using the content of the auto modules: a dictionary maps each model prefix (the config name without "Config") to its model type, each model type is flagged for a PyTorch/TensorFlow/Flax backend by looking up all transformers objects once (trying again after removing the last word in the name), and the right processing class is then found for each model by checking for a dedicated processor, tokenizer, image processor or feature extractor, defaulting to AutoTokenizer if a model has nothing, for backward compatibility. update_pipeline_and_auto_class_table updates the table mapping models to pipelines and auto classes without removing old keys if they don't exist anymore: it loops through all three frameworks (the type of pipeline may not exist in a given framework), first extracts all model names, then adds the pipeline tag and auto model class for those models. update_metadata(token, commit_sha) updates the metadata for the Transformers repo in huggingface/transformers-metadata; the model classes are sorted to avoid non-deterministic updates that would create false update commits. The script can also check that all pipeline tags are properly defined in the PIPELINE_TAGS_AND_AUTO_MODELS constant.
import argparse import collections import os import re import tempfile from typing import Dict, List, Tuple import pandas as pd from datasets import Dataset from huggingface_hub import hf_hub_download, upload_folder from transformers.utils import direct_transformers_import TRANSFORMERS_PATH = "src/transformers" transformers_module = direct_transformers_import(TRANSFORMERS_PATH) _re_tf_models = re.compile(r"TF(.*)(?:Model|Encoder|Decoder|ForConditionalGeneration)") _re_flax_models = re.compile(r"Flax(.*)(?:Model|Encoder|Decoder|ForConditionalGeneration)") _re_pt_models = re.compile(r"(.*)(?:Model|Encoder|Decoder|ForConditionalGeneration)") PIPELINE_TAGS_AND_AUTO_MODELS = [ ("pretraining", "MODEL_FOR_PRETRAINING_MAPPING_NAMES", "AutoModelForPreTraining"), ("feature-extraction", "MODEL_MAPPING_NAMES", "AutoModel"), ("audio-classification", "MODEL_FOR_AUDIO_CLASSIFICATION_MAPPING_NAMES", "AutoModelForAudioClassification"), ("text-generation", "MODEL_FOR_CAUSAL_LM_MAPPING_NAMES", "AutoModelForCausalLM"), ("automatic-speech-recognition", "MODEL_FOR_CTC_MAPPING_NAMES", "AutoModelForCTC"), ("image-classification", "MODEL_FOR_IMAGE_CLASSIFICATION_MAPPING_NAMES", "AutoModelForImageClassification"), ("image-segmentation", "MODEL_FOR_IMAGE_SEGMENTATION_MAPPING_NAMES", "AutoModelForImageSegmentation"), ("image-to-image", "MODEL_FOR_IMAGE_TO_IMAGE_MAPPING_NAMES", "AutoModelForImageToImage"), ("fill-mask", "MODEL_FOR_MASKED_LM_MAPPING_NAMES", "AutoModelForMaskedLM"), ("object-detection", "MODEL_FOR_OBJECT_DETECTION_MAPPING_NAMES", "AutoModelForObjectDetection"), ( "zero-shot-object-detection", "MODEL_FOR_ZERO_SHOT_OBJECT_DETECTION_MAPPING_NAMES", "AutoModelForZeroShotObjectDetection", ), ("question-answering", "MODEL_FOR_QUESTION_ANSWERING_MAPPING_NAMES", "AutoModelForQuestionAnswering"), ("text2text-generation", "MODEL_FOR_SEQ_TO_SEQ_CAUSAL_LM_MAPPING_NAMES", "AutoModelForSeq2SeqLM"), ("text-classification", "MODEL_FOR_SEQUENCE_CLASSIFICATION_MAPPING_NAMES", "AutoModelForSequenceClassification"), ("automatic-speech-recognition", "MODEL_FOR_SPEECH_SEQ_2_SEQ_MAPPING_NAMES", "AutoModelForSpeechSeq2Seq"), ( "table-question-answering", "MODEL_FOR_TABLE_QUESTION_ANSWERING_MAPPING_NAMES", "AutoModelForTableQuestionAnswering", ), ("token-classification", "MODEL_FOR_TOKEN_CLASSIFICATION_MAPPING_NAMES", "AutoModelForTokenClassification"), ("multiple-choice", "MODEL_FOR_MULTIPLE_CHOICE_MAPPING_NAMES", "AutoModelForMultipleChoice"), ( "next-sentence-prediction", "MODEL_FOR_NEXT_SENTENCE_PREDICTION_MAPPING_NAMES", "AutoModelForNextSentencePrediction", ), ( "audio-frame-classification", "MODEL_FOR_AUDIO_FRAME_CLASSIFICATION_MAPPING_NAMES", "AutoModelForAudioFrameClassification", ), ("audio-xvector", "MODEL_FOR_AUDIO_XVECTOR_MAPPING_NAMES", "AutoModelForAudioXVector"), ( "document-question-answering", "MODEL_FOR_DOCUMENT_QUESTION_ANSWERING_MAPPING_NAMES", "AutoModelForDocumentQuestionAnswering", ), ( "visual-question-answering", "MODEL_FOR_VISUAL_QUESTION_ANSWERING_MAPPING_NAMES", "AutoModelForVisualQuestionAnswering", ), ("image-to-text", "MODEL_FOR_FOR_VISION_2_SEQ_MAPPING_NAMES", "AutoModelForVision2Seq"), ( "zero-shot-image-classification", "MODEL_FOR_ZERO_SHOT_IMAGE_CLASSIFICATION_MAPPING_NAMES", "AutoModelForZeroShotImageClassification", ), ("depth-estimation", "MODEL_FOR_DEPTH_ESTIMATION_MAPPING_NAMES", "AutoModelForDepthEstimation"), ("video-classification", "MODEL_FOR_VIDEO_CLASSIFICATION_MAPPING_NAMES", "AutoModelForVideoClassification"), ("mask-generation", "MODEL_FOR_MASK_GENERATION_MAPPING_NAMES", 
"AutoModelForMaskGeneration"), ("text-to-audio", "MODEL_FOR_TEXT_TO_SPECTROGRAM_MAPPING_NAMES", "AutoModelForTextToSpectrogram"), ("text-to-audio", "MODEL_FOR_TEXT_TO_WAVEFORM_MAPPING_NAMES", "AutoModelForTextToWaveform"), ] def camel_case_split(identifier: str) -> List[str]: matches = re.finditer(".+?(?:(?<=[a-z])(?=[A-Z])|(?<=[A-Z])(?=[A-Z][a-z])|$)", identifier) return [m.group(0) for m in matches] def get_frameworks_table() -> pd.DataFrame: config_maping_names = transformers_module.models.auto.configuration_auto.CONFIG_MAPPING_NAMES model_prefix_to_model_type = { config.replace("Config", ""): model_type for model_type, config in config_maping_names.items() } pt_models = collections.defaultdict(bool) tf_models = collections.defaultdict(bool) flax_models = collections.defaultdict(bool) for attr_name in dir(transformers_module): lookup_dict = None if _re_tf_models.match(attr_name) is not None: lookup_dict = tf_models attr_name = _re_tf_models.match(attr_name).groups()[0] elif _re_flax_models.match(attr_name) is not None: lookup_dict = flax_models attr_name = _re_flax_models.match(attr_name).groups()[0] elif _re_pt_models.match(attr_name) is not None: lookup_dict = pt_models attr_name = _re_pt_models.match(attr_name).groups()[0] if lookup_dict is not None: while len(attr_name) > 0: if attr_name in model_prefix_to_model_type: lookup_dict[model_prefix_to_model_type[attr_name]] = True break attr_name = "".join(camel_case_split(attr_name)[:-1]) all_models = set(list(pt_models.keys()) + list(tf_models.keys()) + list(flax_models.keys())) all_models = list(all_models) all_models.sort() data = {"model_type": all_models} data["pytorch"] = [pt_models[t] for t in all_models] data["tensorflow"] = [tf_models[t] for t in all_models] data["flax"] = [flax_models[t] for t in all_models] processors = {} for t in all_models: if t in transformers_module.models.auto.processing_auto.PROCESSOR_MAPPING_NAMES: processors[t] = "AutoProcessor" elif t in transformers_module.models.auto.tokenization_auto.TOKENIZER_MAPPING_NAMES: processors[t] = "AutoTokenizer" elif t in transformers_module.models.auto.image_processing_auto.IMAGE_PROCESSOR_MAPPING_NAMES: processors[t] = "AutoImageProcessor" elif t in transformers_module.models.auto.feature_extraction_auto.FEATURE_EXTRACTOR_MAPPING_NAMES: processors[t] = "AutoFeatureExtractor" else: processors[t] = "AutoTokenizer" data["processor"] = [processors[t] for t in all_models] return pd.DataFrame(data) def update_pipeline_and_auto_class_table(table: Dict[str, Tuple[str, str]]) -> Dict[str, Tuple[str, str]]: auto_modules = [ transformers_module.models.auto.modeling_auto, transformers_module.models.auto.modeling_tf_auto, transformers_module.models.auto.modeling_flax_auto, ] for pipeline_tag, model_mapping, auto_class in PIPELINE_TAGS_AND_AUTO_MODELS: model_mappings = [model_mapping, f"TF_{model_mapping}", f"FLAX_{model_mapping}"] auto_classes = [auto_class, f"TF_{auto_class}", f"Flax_{auto_class}"] for module, cls, mapping in zip(auto_modules, auto_classes, model_mappings): if not hasattr(module, mapping): continue model_names = [] for name in getattr(module, mapping).values(): if isinstance(name, str): model_names.append(name) else: model_names.extend(list(name)) table.update({model_name: (pipeline_tag, cls) for model_name in model_names}) return table def update_metadata(token: str, commit_sha: str): frameworks_table = get_frameworks_table() frameworks_dataset = Dataset.from_pandas(frameworks_table) resolved_tags_file = hf_hub_download( "huggingface/transformers-metadata", 
"pipeline_tags.json", repo_type="dataset", token=token ) tags_dataset = Dataset.from_json(resolved_tags_file) table = { tags_dataset[i]["model_class"]: (tags_dataset[i]["pipeline_tag"], tags_dataset[i]["auto_class"]) for i in range(len(tags_dataset)) } table = update_pipeline_and_auto_class_table(table) model_classes = sorted(table.keys()) tags_table = pd.DataFrame( { "model_class": model_classes, "pipeline_tag": [table[m][0] for m in model_classes], "auto_class": [table[m][1] for m in model_classes], } ) tags_dataset = Dataset.from_pandas(tags_table) with tempfile.TemporaryDirectory() as tmp_dir: frameworks_dataset.to_json(os.path.join(tmp_dir, "frameworks.json")) tags_dataset.to_json(os.path.join(tmp_dir, "pipeline_tags.json")) if commit_sha is not None: commit_message = ( f"Update with commit {commit_sha}\n\nSee: " f"https://github.com/huggingface/transformers/commit/{commit_sha}" ) else: commit_message = "Update" upload_folder( repo_id="huggingface/transformers-metadata", folder_path=tmp_dir, repo_type="dataset", token=token, commit_message=commit_message, ) def check_pipeline_tags(): in_table = {tag: cls for tag, _, cls in PIPELINE_TAGS_AND_AUTO_MODELS} pipeline_tasks = transformers_module.pipelines.SUPPORTED_TASKS missing = [] for key in pipeline_tasks: if key not in in_table: model = pipeline_tasks[key]["pt"] if isinstance(model, (list, tuple)): model = model[0] model = model.__name__ if model not in in_table.values(): missing.append(key) if len(missing) > 0: msg = ", ".join(missing) raise ValueError( "The following pipeline tags are not present in the `PIPELINE_TAGS_AND_AUTO_MODELS` constant inside " f"`utils/update_metadata.py`: {msg}. Please add them!" ) if __name__ == "__main__": parser = argparse.ArgumentParser() parser.add_argument("--token", type=str, help="The token to use to push to the transformers-metadata dataset.") parser.add_argument("--commit_sha", type=str, help="The sha of the commit going with this update.") parser.add_argument("--check-only", action="store_true", help="Activate to just check all pipelines are present.") args = parser.parse_args() if args.check_only: check_pipeline_tags() else: update_metadata(args.token, args.commit_sha)
utf-8. © 2023 The HuggingFace Inc. team. Licensed under the Apache License, Version 2.0 (http://www.apache.org/licenses/LICENSE-2.0); distributed on an "AS IS" basis, without warranties or conditions of any kind, either express or implied. See the License for the specific language governing permissions and limitations.

A script running create_dummy_models.py with a pre-defined set of arguments. This file is intended to be used in a CI workflow file without the need of specifying arguments. It creates and uploads tiny models for all model classes if their tiny versions are not on the Hub yet, and produces an updated version of tests/utils/tiny_model_summary.json; that updated file should be merged into the main branch of transformers so the pipeline testing will use the latest created/updated tiny models.

Inline comments from the script: each auto modeling file contains multiple mappings, so they are collected dynamically (all mappings in a single auto modeling file, then all model names defined in the auto mappings); a tiny model name is removed if one of its framework implementations doesn't yet have a tiny version on the Hub; the remaining helpers gather all tiny model base names and all tiny model names on the Hub; the multiprocessing start method has to be "spawn" to avoid hanging forever.
import argparse import copy import json import multiprocessing import os import time from create_dummy_models import COMPOSITE_MODELS, create_tiny_models from huggingface_hub import ModelFilter, hf_api import transformers from transformers import AutoFeatureExtractor, AutoImageProcessor, AutoTokenizer from transformers.image_processing_utils import BaseImageProcessor def get_all_model_names(): model_names = set() for module_name in ["modeling_auto", "modeling_tf_auto", "modeling_flax_auto"]: module = getattr(transformers.models.auto, module_name, None) if module is None: continue mapping_names = [ x for x in dir(module) if x.endswith("_MAPPING_NAMES") and (x.startswith("MODEL_") or x.startswith("TF_MODEL_") or x.startswith("FLAX_MODEL_")) ] for name in mapping_names: mapping = getattr(module, name) if mapping is not None: for v in mapping.values(): if isinstance(v, (list, tuple)): model_names.update(v) elif isinstance(v, str): model_names.add(v) return sorted(model_names) def get_tiny_model_names_from_repo(): model_names = set(get_all_model_names()) with open("tests/utils/tiny_model_summary.json") as fp: tiny_model_info = json.load(fp) tiny_models_names = set() for model_base_name in tiny_model_info: tiny_models_names.update(tiny_model_info[model_base_name]["model_classes"]) not_on_hub = model_names.difference(tiny_models_names) for model_name in copy.copy(tiny_models_names): if not model_name.startswith("TF") and f"TF{model_name}" in not_on_hub: tiny_models_names.remove(model_name) elif model_name.startswith("TF") and model_name[2:] in not_on_hub: tiny_models_names.remove(model_name) return sorted(tiny_models_names) def get_tiny_model_summary_from_hub(output_path): special_models = COMPOSITE_MODELS.values() model_names = get_all_model_names() models = hf_api.list_models( filter=ModelFilter( author="hf-internal-testing", ) ) _models = set() for x in models: model = x.modelId org, model = model.split("/") if not model.startswith("tiny-random-"): continue model = model.replace("tiny-random-", "") if not model[0].isupper(): continue if model not in model_names and model not in special_models: continue _models.add(model) models = sorted(_models) summary = {} for model in models: repo_id = f"hf-internal-testing/tiny-random-{model}" model = model.split("-")[0] try: repo_info = hf_api.repo_info(repo_id) content = { "tokenizer_classes": set(), "processor_classes": set(), "model_classes": set(), "sha": repo_info.sha, } except Exception: continue try: time.sleep(1) tokenizer_fast = AutoTokenizer.from_pretrained(repo_id) content["tokenizer_classes"].add(tokenizer_fast.__class__.__name__) except Exception: pass try: time.sleep(1) tokenizer_slow = AutoTokenizer.from_pretrained(repo_id, use_fast=False) content["tokenizer_classes"].add(tokenizer_slow.__class__.__name__) except Exception: pass try: time.sleep(1) img_p = AutoImageProcessor.from_pretrained(repo_id) content["processor_classes"].add(img_p.__class__.__name__) except Exception: pass try: time.sleep(1) feat_p = AutoFeatureExtractor.from_pretrained(repo_id) if not isinstance(feat_p, BaseImageProcessor): content["processor_classes"].add(feat_p.__class__.__name__) except Exception: pass try: time.sleep(1) model_class = getattr(transformers, model) m = model_class.from_pretrained(repo_id) content["model_classes"].add(m.__class__.__name__) except Exception: pass try: time.sleep(1) model_class = getattr(transformers, f"TF{model}") m = model_class.from_pretrained(repo_id) content["model_classes"].add(m.__class__.__name__) except Exception: pass 
content["tokenizer_classes"] = sorted(content["tokenizer_classes"]) content["processor_classes"] = sorted(content["processor_classes"]) content["model_classes"] = sorted(content["model_classes"]) summary[model] = content with open(os.path.join(output_path, "hub_tiny_model_summary.json"), "w") as fp: json.dump(summary, fp, ensure_ascii=False, indent=4) if __name__ == "__main__": parser = argparse.ArgumentParser() parser.add_argument("--num_workers", default=1, type=int, help="The number of workers to run.") args = parser.parse_args() multiprocessing.set_start_method("spawn") output_path = "tiny_models" all = True model_types = None models_to_skip = get_tiny_model_names_from_repo() no_check = True upload = True organization = "hf-internal-testing" create_tiny_models( output_path, all, model_types, models_to_skip, no_check, upload, organization, token=os.environ.get("TOKEN", None), num_workers=args.num_workers, )
Comments from the download script below: the backslash-to-slash replacement in the subdirectory path is needed for Windows, and chunk_size is 1000 (1 KB) since the Ethernet packet size is around 1500 bytes.
import os import sys import requests from tqdm import tqdm if len(sys.argv) != 2: print('You must enter the model name as a parameter, e.g.: download_model.py 124M') sys.exit(1) model = sys.argv[1] subdir = os.path.join('models', model) if not os.path.exists(subdir): os.makedirs(subdir) subdir = subdir.replace('\\','/') for filename in ['checkpoint','encoder.json','hparams.json','model.ckpt.data-00000-of-00001', 'model.ckpt.index', 'model.ckpt.meta', 'vocab.bpe']: r = requests.get("https://openaipublic.blob.core.windows.net/gpt-2/" + subdir + "/" + filename, stream=True) with open(os.path.join(subdir, filename), 'wb') as f: file_size = int(r.headers["content-length"]) chunk_size = 1000 with tqdm(ncols=100, desc="Fetching " + filename, total=file_size, unit_scale=True) as pbar: for chunk in r.iter_content(chunk_size=chunk_size): f.write(chunk) pbar.update(chunk_size)
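One small caveat in the loop above is that the progress bar is advanced by the nominal chunk_size even for the final, shorter chunk. A variant that advances by the actual number of bytes received might look like the following sketch (the URL is hypothetical and purely illustrative).

import requests
from tqdm import tqdm

def fetch(url, path, chunk_size=1000):
    # Stream the response and advance the bar by the real chunk length,
    # so the bar ends exactly at the reported content length.
    r = requests.get(url, stream=True)
    total = int(r.headers.get("content-length", 0))
    with open(path, "wb") as f, tqdm(total=total, unit_scale=True, desc=path) as pbar:
        for chunk in r.iter_content(chunk_size=chunk_size):
            f.write(chunk)
            pbar.update(len(chunk))

# fetch("https://example.com/model.ckpt", "model.ckpt")  # hypothetical URL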
Byte pair encoding utilities (docstrings and comments from the module below).

bytes_to_unicode returns a list of UTF-8 bytes and a corresponding list of unicode strings. The reversible BPE codes work on unicode strings, which means you need a large number of unicode characters in your vocab if you want to avoid UNKs; when you're at something like a 10B-token dataset you end up needing around 5K for decent coverage, which is a significant percentage of your normal, say, 32K BPE vocab. To avoid that, we want lookup tables between UTF-8 bytes and unicode strings, and we avoid mapping to whitespace/control characters the BPE code barfs on.

get_pairs returns the set of symbol pairs in a word, where a word is represented as a tuple of symbols (symbols being variable-length strings).

In Encoder, the errors argument controls how to handle errors in decoding. Note from the original authors on the tokenization pattern: re.IGNORECASE should have been added so BPE merges can happen for capitalized versions of contractions ('s, 't, 're, 've, 'm, 'll, 'd).
"""Byte pair encoding utilities"""

import os
import json
import regex as re
from functools import lru_cache


@lru_cache()
def bytes_to_unicode():
    """
    Returns list of utf-8 bytes and a corresponding list of unicode strings.
    The reversible bpe codes work on unicode strings. This means you need a large
    number of unicode characters in your vocab if you want to avoid UNKs. When
    you're at something like a 10B token dataset you end up needing around 5K for
    decent coverage. This is a significant percentage of your normal, say, 32K bpe
    vocab. To avoid that, we want lookup tables between utf-8 bytes and unicode
    strings, and avoid mapping to whitespace/control characters the bpe code barfs on.
    """
    bs = list(range(ord("!"), ord("~")+1)) + list(range(ord("¡"), ord("¬")+1)) + list(range(ord("®"), ord("ÿ")+1))
    cs = bs[:]
    n = 0
    for b in range(2**8):
        if b not in bs:
            bs.append(b)
            cs.append(2**8 + n)
            n += 1
    cs = [chr(n) for n in cs]
    return dict(zip(bs, cs))


def get_pairs(word):
    """Return set of symbol pairs in a word.

    Word is represented as a tuple of symbols (symbols being variable-length strings).
    """
    pairs = set()
    prev_char = word[0]
    for char in word[1:]:
        pairs.add((prev_char, char))
        prev_char = char
    return pairs


class Encoder:
    def __init__(self, encoder, bpe_merges, errors='replace'):
        self.encoder = encoder
        self.decoder = {v: k for k, v in self.encoder.items()}
        self.errors = errors  # how to handle errors in decoding
        self.byte_encoder = bytes_to_unicode()
        self.byte_decoder = {v: k for k, v in self.byte_encoder.items()}
        self.bpe_ranks = dict(zip(bpe_merges, range(len(bpe_merges))))
        self.cache = {}
        # Should have added re.IGNORECASE so bpe merges can happen for capitalized versions of contractions
        self.pat = re.compile(r"""'s|'t|'re|'ve|'m|'ll|'d| ?\p{L}+| ?\p{N}+| ?[^\s\p{L}\p{N}]+|\s+(?!\S)|\s+""")

    def bpe(self, token):
        if token in self.cache:
            return self.cache[token]
        word = tuple(token)
        pairs = get_pairs(word)
        if not pairs:
            return token

        while True:
            bigram = min(pairs, key=lambda pair: self.bpe_ranks.get(pair, float('inf')))
            if bigram not in self.bpe_ranks:
                break
            first, second = bigram
            new_word = []
            i = 0
            while i < len(word):
                try:
                    j = word.index(first, i)
                    new_word.extend(word[i:j])
                    i = j
                except ValueError:
                    new_word.extend(word[i:])
                    break

                if word[i] == first and i < len(word)-1 and word[i+1] == second:
                    new_word.append(first + second)
                    i += 2
                else:
                    new_word.append(word[i])
                    i += 1
            new_word = tuple(new_word)
            word = new_word
            if len(word) == 1:
                break
            else:
                pairs = get_pairs(word)
        word = ' '.join(word)
        self.cache[token] = word
        return word

    def encode(self, text):
        bpe_tokens = []
        for token in re.findall(self.pat, text):
            token = ''.join(self.byte_encoder[b] for b in token.encode('utf-8'))
            bpe_tokens.extend(self.encoder[bpe_token] for bpe_token in self.bpe(token).split(' '))
        return bpe_tokens

    def decode(self, tokens):
        text = ''.join([self.decoder[token] for token in tokens])
        text = bytearray([self.byte_decoder[c] for c in text]).decode('utf-8', errors=self.errors)
        return text


def get_encoder(model_name, models_dir):
    with open(os.path.join(models_dir, model_name, 'encoder.json'), 'r') as f:
        encoder = json.load(f)
    with open(os.path.join(models_dir, model_name, 'vocab.bpe'), 'r', encoding="utf-8") as f:
        bpe_data = f.read()
    bpe_merges = [tuple(merge_str.split()) for merge_str in bpe_data.split('\n')[1:-1]]
    return Encoder(
        encoder=encoder,
        bpe_merges=bpe_merges,
    )
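A short usage sketch of the two helpers above; the printed values follow directly from the definitions (for example, the space byte 32 is remapped to 'Ġ', which is why GPT-2 tokens that start a new word carry that prefix). The import assumes the module above is saved as encoder.py.

from encoder import bytes_to_unicode, get_pairs

byte_map = bytes_to_unicode()
print(byte_map[ord("A")])   # 'A'  -- printable ASCII maps to itself
print(byte_map[32])         # 'Ġ'  -- the space byte is shifted into a printable code point

word = tuple("hello")
print(get_pairs(word))      # {('h', 'e'), ('e', 'l'), ('l', 'l'), ('l', 'o')} (set order may vary)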
#!/usr/bin/env python3

Run the sample_model.

model_name='124M' : String, which model to use.
seed=None : Integer seed for random number generators; fix the seed to reproduce results.
nsamples=0 : Number of samples to return; if 0, continues to generate samples indefinitely.
batch_size=1 : Number of batches (only affects speed/memory).
length=None : Number of tokens in generated text; if None (default), is determined by model hyperparameters.
temperature=1 : Float value controlling randomness in the Boltzmann distribution. Lower temperature results in less random completions; as the temperature approaches zero, the model becomes deterministic and repetitive. Higher temperature results in more random completions.
top_k=0 : Integer value controlling diversity. 1 means only 1 word is considered for each step (token), resulting in deterministic completions, while 40 means 40 words are considered at each step. 0 (default) is a special setting meaning no restrictions; 40 generally is a good value.
models_dir='models' : Path to the parent folder containing model subfolders (i.e. contains the model_name folder).
import fire import json import os import numpy as np import tensorflow as tf import model, sample, encoder def sample_model( model_name='124M', seed=None, nsamples=0, batch_size=1, length=None, temperature=1, top_k=0, top_p=1, models_dir='models', ): models_dir = os.path.expanduser(os.path.expandvars(models_dir)) enc = encoder.get_encoder(model_name, models_dir) hparams = model.default_hparams() with open(os.path.join(models_dir, model_name, 'hparams.json')) as f: hparams.override_from_dict(json.load(f)) if length is None: length = hparams.n_ctx elif length > hparams.n_ctx: raise ValueError("Can't get samples longer than window size: %s" % hparams.n_ctx) with tf.Session(graph=tf.Graph()) as sess: np.random.seed(seed) tf.set_random_seed(seed) output = sample.sample_sequence( hparams=hparams, length=length, start_token=enc.encoder['<|endoftext|>'], batch_size=batch_size, temperature=temperature, top_k=top_k, top_p=top_p )[:, 1:] saver = tf.train.Saver() ckpt = tf.train.latest_checkpoint(os.path.join(models_dir, model_name)) saver.restore(sess, ckpt) generated = 0 while nsamples == 0 or generated < nsamples: out = sess.run(output) for i in range(batch_size): generated += batch_size text = enc.decode(out[i]) print("=" * 40 + " SAMPLE " + str(generated) + " " + "=" * 40) print(text) if __name__ == '__main__': fire.Fire(sample_model)
#!/usr/bin/env python3

Interactively run the model.

model_name='124M' : String, which model to use.
seed=None : Integer seed for random number generators; fix the seed to reproduce results.
nsamples=1 : Number of samples to return in total.
batch_size=1 : Number of batches (only affects speed/memory); must divide nsamples.
length=None : Number of tokens in generated text; if None (default), is determined by model hyperparameters.
temperature=1 : Float value controlling randomness in the Boltzmann distribution. Lower temperature results in less random completions; as the temperature approaches zero, the model becomes deterministic and repetitive. Higher temperature results in more random completions.
top_k=0 : Integer value controlling diversity. 1 means only 1 word is considered for each step (token), resulting in deterministic completions, while 40 means 40 words are considered at each step. 0 (default) is a special setting meaning no restrictions; 40 generally is a good value.
models_dir='models' : Path to the parent folder containing model subfolders (i.e. contains the model_name folder).
import fire import json import os import numpy as np import tensorflow as tf import model, sample, encoder def interact_model( model_name='124M', seed=None, nsamples=1, batch_size=1, length=None, temperature=1, top_k=0, top_p=1, models_dir='models', ): models_dir = os.path.expanduser(os.path.expandvars(models_dir)) if batch_size is None: batch_size = 1 assert nsamples % batch_size == 0 enc = encoder.get_encoder(model_name, models_dir) hparams = model.default_hparams() with open(os.path.join(models_dir, model_name, 'hparams.json')) as f: hparams.override_from_dict(json.load(f)) if length is None: length = hparams.n_ctx // 2 elif length > hparams.n_ctx: raise ValueError("Can't get samples longer than window size: %s" % hparams.n_ctx) with tf.Session(graph=tf.Graph()) as sess: context = tf.placeholder(tf.int32, [batch_size, None]) np.random.seed(seed) tf.set_random_seed(seed) output = sample.sample_sequence( hparams=hparams, length=length, context=context, batch_size=batch_size, temperature=temperature, top_k=top_k, top_p=top_p ) saver = tf.train.Saver() ckpt = tf.train.latest_checkpoint(os.path.join(models_dir, model_name)) saver.restore(sess, ckpt) while True: raw_text = input("Model prompt >>> ") while not raw_text: print('Prompt should not be empty!') raw_text = input("Model prompt >>> ") context_tokens = enc.encode(raw_text) generated = 0 for _ in range(nsamples // batch_size): out = sess.run(output, feed_dict={ context: [context_tokens for _ in range(batch_size)] })[:, len(context_tokens):] for i in range(batch_size): generated += 1 text = enc.decode(out[i]) print("=" * 40 + " SAMPLE " + str(generated) + " " + "=" * 40) print(text) print("=" * 80) if __name__ == '__main__': fire.Fire(interact_model)
Comments from the GPT-2 model definition below:
shape_list: deal with dynamic shape in TensorFlow cleanly.
norm: normalize to mean = 0, std = 1, then do a diagonal affine transform.
split_states: reshape the last dimension of x into [n, x.shape[-1]//n]. merge_states: smash the last two dimensions of x into a single dimension.
attention_mask: 1's in the lower triangle, counting from the lower right corner; same as tf.matrix_band_part(tf.ones([nd, ns]), -1, ns-nd), but doesn't produce garbage on TPUs.
attn: x should be [batch, sequence, features]; past should be [batch, 2, heads, sequence, features], where 2 is [k, v]. split_heads goes from [batch, sequence, features] to [batch, heads, sequence, features]; merge_heads is the reverse of split_heads. w has shape [batch, heads, dst_sequence, src_sequence], where information flows from src to dst. q, k, v have shape [batch, heads, sequence, features].
expand_tile: add a new axis of the given size.
model: the transformer stack, followed by the language model loss (do tokens < n predict token n?).
import numpy as np import tensorflow as tf from tensorflow.contrib.training import HParams def default_hparams(): return HParams( n_vocab=0, n_ctx=1024, n_embd=768, n_head=12, n_layer=12, ) def shape_list(x): static = x.shape.as_list() dynamic = tf.shape(x) return [dynamic[i] if s is None else s for i, s in enumerate(static)] def softmax(x, axis=-1): x = x - tf.reduce_max(x, axis=axis, keepdims=True) ex = tf.exp(x) return ex / tf.reduce_sum(ex, axis=axis, keepdims=True) def gelu(x): return 0.5*x*(1+tf.tanh(np.sqrt(2/np.pi)*(x+0.044715*tf.pow(x, 3)))) def norm(x, scope, *, axis=-1, epsilon=1e-5): with tf.variable_scope(scope): n_state = x.shape[-1].value g = tf.get_variable('g', [n_state], initializer=tf.constant_initializer(1)) b = tf.get_variable('b', [n_state], initializer=tf.constant_initializer(0)) u = tf.reduce_mean(x, axis=axis, keepdims=True) s = tf.reduce_mean(tf.square(x-u), axis=axis, keepdims=True) x = (x - u) * tf.rsqrt(s + epsilon) x = x*g + b return x def split_states(x, n): *start, m = shape_list(x) return tf.reshape(x, start + [n, m//n]) def merge_states(x): *start, a, b = shape_list(x) return tf.reshape(x, start + [a*b]) def conv1d(x, scope, nf, *, w_init_stdev=0.02): with tf.variable_scope(scope): *start, nx = shape_list(x) w = tf.get_variable('w', [1, nx, nf], initializer=tf.random_normal_initializer(stddev=w_init_stdev)) b = tf.get_variable('b', [nf], initializer=tf.constant_initializer(0)) c = tf.reshape(tf.matmul(tf.reshape(x, [-1, nx]), tf.reshape(w, [-1, nf]))+b, start+[nf]) return c def attention_mask(nd, ns, *, dtype): i = tf.range(nd)[:,None] j = tf.range(ns) m = i >= j - ns + nd return tf.cast(m, dtype) def attn(x, scope, n_state, *, past, hparams): assert x.shape.ndims == 3 assert n_state % hparams.n_head == 0 if past is not None: assert past.shape.ndims == 5 def split_heads(x): return tf.transpose(split_states(x, hparams.n_head), [0, 2, 1, 3]) def merge_heads(x): return merge_states(tf.transpose(x, [0, 2, 1, 3])) def mask_attn_weights(w): _, _, nd, ns = shape_list(w) b = attention_mask(nd, ns, dtype=w.dtype) b = tf.reshape(b, [1, 1, nd, ns]) w = w*b - tf.cast(1e10, w.dtype)*(1-b) return w def multihead_attn(q, k, v): w = tf.matmul(q, k, transpose_b=True) w = w * tf.rsqrt(tf.cast(v.shape[-1].value, w.dtype)) w = mask_attn_weights(w) w = softmax(w) a = tf.matmul(w, v) return a with tf.variable_scope(scope): c = conv1d(x, 'c_attn', n_state*3) q, k, v = map(split_heads, tf.split(c, 3, axis=2)) present = tf.stack([k, v], axis=1) if past is not None: pk, pv = tf.unstack(past, axis=1) k = tf.concat([pk, k], axis=-2) v = tf.concat([pv, v], axis=-2) a = multihead_attn(q, k, v) a = merge_heads(a) a = conv1d(a, 'c_proj', n_state) return a, present def mlp(x, scope, n_state, *, hparams): with tf.variable_scope(scope): nx = x.shape[-1].value h = gelu(conv1d(x, 'c_fc', n_state)) h2 = conv1d(h, 'c_proj', nx) return h2 def block(x, scope, *, past, hparams): with tf.variable_scope(scope): nx = x.shape[-1].value a, present = attn(norm(x, 'ln_1'), 'attn', nx, past=past, hparams=hparams) x = x + a m = mlp(norm(x, 'ln_2'), 'mlp', nx*4, hparams=hparams) x = x + m return x, present def past_shape(*, hparams, batch_size=None, sequence=None): return [batch_size, hparams.n_layer, 2, hparams.n_head, sequence, hparams.n_embd // hparams.n_head] def expand_tile(value, size): value = tf.convert_to_tensor(value, name='value') ndims = value.shape.ndims return tf.tile(tf.expand_dims(value, axis=0), [size] + [1]*ndims) def positions_for(tokens, past_length): batch_size = tf.shape(tokens)[0] 
nsteps = tf.shape(tokens)[1] return expand_tile(past_length + tf.range(nsteps), batch_size) def model(hparams, X, past=None, scope='model', reuse=False): with tf.variable_scope(scope, reuse=reuse): results = {} batch, sequence = shape_list(X) wpe = tf.get_variable('wpe', [hparams.n_ctx, hparams.n_embd], initializer=tf.random_normal_initializer(stddev=0.01)) wte = tf.get_variable('wte', [hparams.n_vocab, hparams.n_embd], initializer=tf.random_normal_initializer(stddev=0.02)) past_length = 0 if past is None else tf.shape(past)[-2] h = tf.gather(wte, X) + tf.gather(wpe, positions_for(X, past_length)) presents = [] pasts = tf.unstack(past, axis=1) if past is not None else [None] * hparams.n_layer assert len(pasts) == hparams.n_layer for layer, past in enumerate(pasts): h, present = block(h, 'h%d' % layer, past=past, hparams=hparams) presents.append(present) results['present'] = tf.stack(presents, axis=1) h = norm(h, 'ln_f') h_flat = tf.reshape(h, [batch*sequence, hparams.n_embd]) logits = tf.matmul(h_flat, wte, transpose_b=True) logits = tf.reshape(logits, [batch, sequence, hparams.n_vocab]) results['logits'] = logits return results
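To see what attention_mask above produces, here is a NumPy re-implementation of the same comparison (i >= j - ns + nd); with nd == ns it is simply a lower-triangular matrix, and with cached keys (ns > nd) the extra left-hand columns are fully visible. This is only an illustrative sketch, not part of the model code.

import numpy as np

def attention_mask(nd, ns):
    # Mirrors model.attention_mask: 1's in the lower triangle,
    # counting from the lower-right corner.
    i = np.arange(nd)[:, None]
    j = np.arange(ns)
    return (i >= j - ns + nd).astype(np.float32)

print(attention_mask(3, 3))
# [[1. 0. 0.]
#  [1. 1. 0.]
#  [1. 1. 1.]]

print(attention_mask(2, 4))  # 2 new tokens attending over 2 cached + 2 new positions
# [[1. 1. 1. 0.]
#  [1. 1. 1. 1.]]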
Comments from the sampling utilities below: in top_k_logits, k = 0 means no truncation; top_p_logits implements nucleus sampling, where the gathered index is the number of indices to include (the tokens whose cumulative probability stays within p).
import tensorflow as tf import model def top_k_logits(logits, k): if k == 0: return logits def _top_k(): values, _ = tf.nn.top_k(logits, k=k) min_values = values[:, -1, tf.newaxis] return tf.where( logits < min_values, tf.ones_like(logits, dtype=logits.dtype) * -1e10, logits, ) return tf.cond( tf.equal(k, 0), lambda: logits, lambda: _top_k(), ) def top_p_logits(logits, p): batch, _ = logits.shape.as_list() sorted_logits = tf.sort(logits, direction='DESCENDING', axis=-1) cumulative_probs = tf.cumsum(tf.nn.softmax(sorted_logits, axis=-1), axis=-1) indices = tf.stack([ tf.range(0, batch), tf.maximum(tf.reduce_sum(tf.cast(cumulative_probs <= p, tf.int32), axis=-1) - 1, 0), ], axis=-1) min_values = tf.gather_nd(sorted_logits, indices) return tf.where( logits < min_values, tf.ones_like(logits) * -1e10, logits, ) def sample_sequence(*, hparams, length, start_token=None, batch_size=None, context=None, temperature=1, top_k=0, top_p=1): if start_token is None: assert context is not None, 'Specify exactly one of start_token and context!' else: assert context is None, 'Specify exactly one of start_token and context!' context = tf.fill([batch_size, 1], start_token) def step(hparams, tokens, past=None): lm_output = model.model(hparams=hparams, X=tokens, past=past, reuse=tf.AUTO_REUSE) logits = lm_output['logits'][:, :, :hparams.n_vocab] presents = lm_output['present'] presents.set_shape(model.past_shape(hparams=hparams, batch_size=batch_size)) return { 'logits': logits, 'presents': presents, } with tf.name_scope('sample_sequence'): def body(past, prev, output): next_outputs = step(hparams, prev, past=past) logits = next_outputs['logits'][:, -1, :] / tf.to_float(temperature) logits = top_k_logits(logits, k=top_k) logits = top_p_logits(logits, p=top_p) samples = tf.multinomial(logits, num_samples=1, output_dtype=tf.int32) return [ next_outputs['presents'] if past is None else tf.concat([past, next_outputs['presents']], axis=-2), samples, tf.concat([output, samples], axis=1) ] past, prev, output = body(None, context, context) def cond(*args): return True _, _, tokens = tf.while_loop( cond=cond, body=body, maximum_iterations=length - 1, loop_vars=[ past, prev, output ], shape_invariants=[ tf.TensorShape(model.past_shape(hparams=hparams, batch_size=batch_size)), tf.TensorShape([batch_size, None]), tf.TensorShape([batch_size, None]), ], back_prop=False, ) return tokens
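The nucleus-sampling cutoff above can be hard to read in graph-mode TensorFlow. The following NumPy sketch mirrors the same steps on a single made-up logit vector: sort descending, accumulate softmax probabilities, keep everything up to the last index whose cumulative probability is still <= p, and mask the rest.

import numpy as np

def top_p_filter(logits, p):
    # Mirrors sample.top_p_logits for a single 1-D logit vector.
    sorted_logits = np.sort(logits)[::-1]
    probs = np.exp(sorted_logits - sorted_logits.max())
    probs /= probs.sum()
    cumulative = np.cumsum(probs)
    cutoff_index = max(int(np.sum(cumulative <= p)) - 1, 0)
    min_value = sorted_logits[cutoff_index]
    return np.where(logits < min_value, -1e10, logits)

logits = np.array([2.0, 1.0, 0.5, -1.0])
print(top_p_filter(logits, p=0.8))
# [ 2.e+00 -1.e+10 -1.e+10 -1.e+10] -- only the top token survives p=0.8 for these logits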
Natural Language Toolkit: Applications package. © 2001-2023 NLTK Project. Authors: Edward Loper <edloper@gmail.com>, Steven Bird <stevenbird1@gmail.com>. URL: https://www.nltk.org/. For license information, see LICENSE.TXT.

Interactive NLTK Applications:
chartparser: Chart Parser
chunkparser: Regular-Expression Chunk Parser
collocations: Find collocations in text
concordance: Part-of-speech concordancer
nemo: Finding (and Replacing) Nemo, regular expression tool
rdparser: Recursive Descent Parser
srparser: Shift-Reduce Parser
wordnet: WordNet Browser

Tkinter-based modules are imported only if Tkinter is installed.
try: import tkinter except ImportError: import warnings warnings.warn("nltk.app package not loaded (please install Tkinter library).") else: from nltk.app.chartparser_app import app as chartparser from nltk.app.chunkparser_app import app as chunkparser from nltk.app.collocations_app import app as collocations from nltk.app.concordance_app import app as concordance from nltk.app.nemo_app import app as nemo from nltk.app.rdparser_app import app as rdparser from nltk.app.srparser_app import app as srparser from nltk.app.wordnet_app import app as wordnet try: from matplotlib import pylab except ImportError: import warnings warnings.warn("nltk.app.wordfreq not loaded (requires the matplotlib library).") else: from nltk.app.wordfreq_app import app as wordfreq
Natural Language Toolkit: Wordfreq Application. © 2001-2023 NLTK Project. Author: Sumukh Ghodke <sghodke@csse.unimelb.edu.au>. URL: https://www.nltk.org/. For license information, see LICENSE.TXT.
from matplotlib import pylab from nltk.corpus import gutenberg from nltk.text import Text def plot_word_freq_dist(text): fd = text.vocab() samples = [item for item, _ in fd.most_common(50)] values = [fd[sample] for sample in samples] values = [sum(values[: i + 1]) * 100.0 / fd.N() for i in range(len(values))] pylab.title(text.name) pylab.xlabel("Samples") pylab.ylabel("Cumulative Percentage") pylab.plot(values) pylab.xticks(range(len(samples)), [str(s) for s in samples], rotation=90) pylab.show() def app(): t1 = Text(gutenberg.words("melville-moby_dick.txt")) plot_word_freq_dist(t1) if __name__ == "__main__": app() __all__ = ["app"]
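A minimal sketch of the cumulative-percentage computation used in the plot above, on a tiny hand-made word list instead of a Gutenberg text.

from nltk.probability import FreqDist

words = ["the", "cat", "sat", "on", "the", "mat", "the", "cat"]
fd = FreqDist(words)

samples = [item for item, _ in fd.most_common(3)]
values = [fd[sample] for sample in samples]
cumulative = [sum(values[: i + 1]) * 100.0 / fd.N() for i in range(len(values))]

print(samples)     # ['the', 'cat', 'sat'] (ties after the top two may come in any order)
print(cumulative)  # [37.5, 62.5, 75.0]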
Natural Language Toolkit: Some texts for exploration in chapter 1 of the book. © 2001-2023 NLTK Project. Author: Steven Bird <stevenbird1@gmail.com>. URL: https://www.nltk.org/. For license information, see LICENSE.TXT.
from nltk.corpus import ( genesis, gutenberg, inaugural, nps_chat, treebank, webtext, wordnet, ) from nltk.probability import FreqDist from nltk.text import Text from nltk.util import bigrams print("*** Introductory Examples for the NLTK Book ***") print("Loading text1, ..., text9 and sent1, ..., sent9") print("Type the name of the text or sentence to view it.") print("Type: 'texts()' or 'sents()' to list the materials.") text1 = Text(gutenberg.words("melville-moby_dick.txt")) print("text1:", text1.name) text2 = Text(gutenberg.words("austen-sense.txt")) print("text2:", text2.name) text3 = Text(genesis.words("english-kjv.txt"), name="The Book of Genesis") print("text3:", text3.name) text4 = Text(inaugural.words(), name="Inaugural Address Corpus") print("text4:", text4.name) text5 = Text(nps_chat.words(), name="Chat Corpus") print("text5:", text5.name) text6 = Text(webtext.words("grail.txt"), name="Monty Python and the Holy Grail") print("text6:", text6.name) text7 = Text(treebank.words(), name="Wall Street Journal") print("text7:", text7.name) text8 = Text(webtext.words("singles.txt"), name="Personals Corpus") print("text8:", text8.name) text9 = Text(gutenberg.words("chesterton-thursday.txt")) print("text9:", text9.name) def texts(): print("text1:", text1.name) print("text2:", text2.name) print("text3:", text3.name) print("text4:", text4.name) print("text5:", text5.name) print("text6:", text6.name) print("text7:", text7.name) print("text8:", text8.name) print("text9:", text9.name) sent1 = ["Call", "me", "Ishmael", "."] sent2 = [ "The", "family", "of", "Dashwood", "had", "long", "been", "settled", "in", "Sussex", ".", ] sent3 = [ "In", "the", "beginning", "God", "created", "the", "heaven", "and", "the", "earth", ".", ] sent4 = [ "Fellow", "-", "Citizens", "of", "the", "Senate", "and", "of", "the", "House", "of", "Representatives", ":", ] sent5 = [ "I", "have", "a", "problem", "with", "people", "PMing", "me", "to", "lol", "JOIN", ] sent6 = [ "SCENE", "1", ":", "[", "wind", "]", "[", "clop", "clop", "clop", "]", "KING", "ARTHUR", ":", "Whoa", "there", "!", ] sent7 = [ "Pierre", "Vinken", ",", "61", "years", "old", ",", "will", "join", "the", "board", "as", "a", "nonexecutive", "director", "Nov.", "29", ".", ] sent8 = [ "25", "SEXY", "MALE", ",", "seeks", "attrac", "older", "single", "lady", ",", "for", "discreet", "encounters", ".", ] sent9 = [ "THE", "suburb", "of", "Saffron", "Park", "lay", "on", "the", "sunset", "side", "of", "London", ",", "as", "red", "and", "ragged", "as", "a", "cloud", "of", "sunset", ".", ] def sents(): print("sent1:", " ".join(sent1)) print("sent2:", " ".join(sent2)) print("sent3:", " ".join(sent3)) print("sent4:", " ".join(sent4)) print("sent5:", " ".join(sent5)) print("sent6:", " ".join(sent6)) print("sent7:", " ".join(sent7)) print("sent8:", " ".join(sent8)) print("sent9:", " ".join(sent9))
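Once the texts above are loaded, a couple of the Text methods the book chapter relies on can be tried directly. A small example (output depends on the NLTK corpora being downloaded and installed):

from nltk.corpus import gutenberg
from nltk.text import Text

text1 = Text(gutenberg.words("melville-moby_dick.txt"))

# Show a few occurrences of a word in context.
text1.concordance("monstrous", lines=5)

# Count a single word and the total number of tokens.
print(text1.count("whale"))
print(len(text1))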
Natural Language Toolkit: Combinatory Categorial Grammar. © 2001-2023 NLTK Project. Author: Graeme Gange <ggange@csse.unimelb.edu.au>. URL: https://www.nltk.org/. For license information, see LICENSE.TXT. Combinatory Categorial Grammar. For more information, see nltk/doc/contrib/ccg/ccg.pdf.
from nltk.ccg.chart import CCGChart, CCGChartParser, CCGEdge, CCGLeafEdge from nltk.ccg.combinator import ( BackwardApplication, BackwardBx, BackwardCombinator, BackwardComposition, BackwardSx, BackwardT, DirectedBinaryCombinator, ForwardApplication, ForwardCombinator, ForwardComposition, ForwardSubstitution, ForwardT, UndirectedBinaryCombinator, UndirectedComposition, UndirectedFunctionApplication, UndirectedSubstitution, UndirectedTypeRaise, ) from nltk.ccg.lexicon import CCGLexicon
Natural Language Toolkit: Combinatory Categorial Grammar. © 2001-2023 NLTK Project. Author: Graeme Gange <ggange@csse.unimelb.edu.au>. URL: https://www.nltk.org/. For license information, see LICENSE.TXT.

The lexicon is constructed by calling lexicon.fromstring(<lexicon string>). In order to construct a parser, you also need a rule set; the standard English rules are provided in chart as chart.DefaultRuleSet. The parser can then be constructed by calling, for example, parser = chart.CCGChartParser(<lexicon>, <ruleset>). Parsing is then performed by running parser.parse(<sentence>.split()). While this returns a list of trees, the default representation of the produced trees is not very enlightening, particularly given that it uses the same tree class as the CFG parsers. It is probably better to call chart.printCCGDerivation(<parse tree extracted from list>), which should print a nice representation of the derivation. This entire process is shown far more clearly in the demonstration: python chart.py (see also the usage sketch after this description).

Inline comments from the module: CCGEdge is based on the EdgeI class from NLTK, although a number of the properties of the EdgeI interface don't transfer well to CCGs; CCGLeafEdge represents leaf edges in a CCG derivation. BinaryCombinatorRule applies a binary combinator to a chart, taking the directed combinator to apply: the left and right edges must be touching, and if the two edges are permitted to combine, the corresponding edge is generated; the string representation of the combinator is used for printing derivations. Type-raising must be handled slightly differently from the other rules, as the resulting rules only span a single edge rather than both edges, hence the separate classes for forward and backward type raising. ApplicationRuleSet, CompositionRuleSet, SubstitutionRuleSet and TypeRaiseRuleSet are common sets of combinators used for English derivations, and DefaultRuleSet is the standard English rule set. CCGChartParser is based largely on the ChartParser class from NLTK and implements the CYK algorithm: initialize leaf edges, select a span for the new edges, try all possible pairs of edges that could generate an edge for that span, generate all possible combinations of the two edges, and output the resulting parses. CCGChart constructs the trees for a given parse; unfortunately the parse trees need to be constructed slightly differently from those in the default Chart class, so this has to be reimplemented. printCCGDerivation displays derivations: it gets the leaves and initial categories, constructs a string with each leaf word and its corresponding category aligned, then displays the derivation steps. printCCGTree prints the sequence of derivation steps: leaf nodes print nothing but account for the space they occupy; each rule application pads to the left with spaces, prints a rule line, and prints the resulting category on a new line.

Demonstration lexicon: S, NP, N and VP are primitive categories, with S as the target primitive. Det :: NP/N, Pro :: NP, TV :: VP/NP and Modal :: (S\NP)/VP are families of words (backslashes need to be escaped in the lexicon source). Word-to-category mappings include I => Pro, you => Pro and the => Det. Variables use the special keyword 'var', where '.' prevents permutation and ',' prevents composition, e.g. and => var\.,var/.,var, and which => (N\N)/(S/NP). Categories can be either explicit or families, e.g. will => Modal, might => Modal, cook => TV, eat => TV, mushrooms => N, parsnips => N, bacon => N.
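Putting the pieces described above together, a small end-to-end sketch using the demonstration lexicon; the sentence is arbitrary, and any string covered by the lexicon works.

from nltk.ccg import chart, lexicon

lex = lexicon.fromstring(
    """
    :- S, NP, N, VP
    Det :: NP/N
    Pro :: NP
    TV :: VP/NP
    Modal :: (S\\NP)/VP
    I => Pro
    you => Pro
    the => Det
    and => var\\.,var/.,var
    which => (N\\N)/(S/NP)
    will => Modal
    might => Modal
    cook => TV
    eat => TV
    mushrooms => N
    parsnips => N
    bacon => N
    """
)

parser = chart.CCGChartParser(lex, chart.DefaultRuleSet)
for parse in parser.parse("you might eat the bacon".split()):
    chart.printCCGDerivation(parse)
    break  # only show the first derivation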
import itertools from nltk.ccg.combinator import * from nltk.ccg.combinator import ( BackwardApplication, BackwardBx, BackwardComposition, BackwardSx, BackwardT, ForwardApplication, ForwardComposition, ForwardSubstitution, ForwardT, ) from nltk.ccg.lexicon import Token, fromstring from nltk.ccg.logic import * from nltk.parse import ParserI from nltk.parse.chart import AbstractChartRule, Chart, EdgeI from nltk.sem.logic import * from nltk.tree import Tree class CCGEdge(EdgeI): def __init__(self, span, categ, rule): self._span = span self._categ = categ self._rule = rule self._comparison_key = (span, categ, rule) def lhs(self): return self._categ def span(self): return self._span def start(self): return self._span[0] def end(self): return self._span[1] def length(self): return self._span[1] - self.span[0] def rhs(self): return () def dot(self): return 0 def is_complete(self): return True def is_incomplete(self): return False def nextsym(self): return None def categ(self): return self._categ def rule(self): return self._rule class CCGLeafEdge(EdgeI): def __init__(self, pos, token, leaf): self._pos = pos self._token = token self._leaf = leaf self._comparison_key = (pos, token.categ(), leaf) def lhs(self): return self._token.categ() def span(self): return (self._pos, self._pos + 1) def start(self): return self._pos def end(self): return self._pos + 1 def length(self): return 1 def rhs(self): return self._leaf def dot(self): return 0 def is_complete(self): return True def is_incomplete(self): return False def nextsym(self): return None def token(self): return self._token def categ(self): return self._token.categ() def leaf(self): return self._leaf class BinaryCombinatorRule(AbstractChartRule): NUMEDGES = 2 def __init__(self, combinator): self._combinator = combinator def apply(self, chart, grammar, left_edge, right_edge): if not (left_edge.end() == right_edge.start()): return if self._combinator.can_combine(left_edge.categ(), right_edge.categ()): for res in self._combinator.combine(left_edge.categ(), right_edge.categ()): new_edge = CCGEdge( span=(left_edge.start(), right_edge.end()), categ=res, rule=self._combinator, ) if chart.insert(new_edge, (left_edge, right_edge)): yield new_edge def __str__(self): return "%s" % self._combinator class ForwardTypeRaiseRule(AbstractChartRule): NUMEDGES = 2 def __init__(self): self._combinator = ForwardT def apply(self, chart, grammar, left_edge, right_edge): if not (left_edge.end() == right_edge.start()): return for res in self._combinator.combine(left_edge.categ(), right_edge.categ()): new_edge = CCGEdge(span=left_edge.span(), categ=res, rule=self._combinator) if chart.insert(new_edge, (left_edge,)): yield new_edge def __str__(self): return "%s" % self._combinator class BackwardTypeRaiseRule(AbstractChartRule): NUMEDGES = 2 def __init__(self): self._combinator = BackwardT def apply(self, chart, grammar, left_edge, right_edge): if not (left_edge.end() == right_edge.start()): return for res in self._combinator.combine(left_edge.categ(), right_edge.categ()): new_edge = CCGEdge(span=right_edge.span(), categ=res, rule=self._combinator) if chart.insert(new_edge, (right_edge,)): yield new_edge def __str__(self): return "%s" % self._combinator ApplicationRuleSet = [ BinaryCombinatorRule(ForwardApplication), BinaryCombinatorRule(BackwardApplication), ] CompositionRuleSet = [ BinaryCombinatorRule(ForwardComposition), BinaryCombinatorRule(BackwardComposition), BinaryCombinatorRule(BackwardBx), ] SubstitutionRuleSet = [ BinaryCombinatorRule(ForwardSubstitution), 
BinaryCombinatorRule(BackwardSx), ] TypeRaiseRuleSet = [ForwardTypeRaiseRule(), BackwardTypeRaiseRule()] DefaultRuleSet = ( ApplicationRuleSet + CompositionRuleSet + SubstitutionRuleSet + TypeRaiseRuleSet ) class CCGChartParser(ParserI): def __init__(self, lexicon, rules, trace=0): self._lexicon = lexicon self._rules = rules self._trace = trace def lexicon(self): return self._lexicon def parse(self, tokens): tokens = list(tokens) chart = CCGChart(list(tokens)) lex = self._lexicon for index in range(chart.num_leaves()): for token in lex.categories(chart.leaf(index)): new_edge = CCGLeafEdge(index, token, chart.leaf(index)) chart.insert(new_edge, ()) for span in range(2, chart.num_leaves() + 1): for start in range(0, chart.num_leaves() - span + 1): for part in range(1, span): lstart = start mid = start + part rend = start + span for left in chart.select(span=(lstart, mid)): for right in chart.select(span=(mid, rend)): for rule in self._rules: edges_added_by_rule = 0 for newedge in rule.apply(chart, lex, left, right): edges_added_by_rule += 1 return chart.parses(lex.start()) class CCGChart(Chart): def __init__(self, tokens): Chart.__init__(self, tokens) def _trees(self, edge, complete, memo, tree_class): assert complete, "CCGChart cannot build incomplete trees" if edge in memo: return memo[edge] if isinstance(edge, CCGLeafEdge): word = tree_class(edge.token(), [self._tokens[edge.start()]]) leaf = tree_class((edge.token(), "Leaf"), [word]) memo[edge] = [leaf] return [leaf] memo[edge] = [] trees = [] for cpl in self.child_pointer_lists(edge): child_choices = [self._trees(cp, complete, memo, tree_class) for cp in cpl] for children in itertools.product(*child_choices): lhs = ( Token( self._tokens[edge.start() : edge.end()], edge.lhs(), compute_semantics(children, edge), ), str(edge.rule()), ) trees.append(tree_class(lhs, children)) memo[edge] = trees return trees def compute_semantics(children, edge): if children[0].label()[0].semantics() is None: return None if len(children) == 2: if isinstance(edge.rule(), BackwardCombinator): children = [children[1], children[0]] combinator = edge.rule()._combinator function = children[0].label()[0].semantics() argument = children[1].label()[0].semantics() if isinstance(combinator, UndirectedFunctionApplication): return compute_function_semantics(function, argument) elif isinstance(combinator, UndirectedComposition): return compute_composition_semantics(function, argument) elif isinstance(combinator, UndirectedSubstitution): return compute_substitution_semantics(function, argument) else: raise AssertionError("Unsupported combinator '" + combinator + "'") else: return compute_type_raised_semantics(children[0].label()[0].semantics()) def printCCGDerivation(tree): leafcats = tree.pos() leafstr = "" catstr = "" for (leaf, cat) in leafcats: str_cat = "%s" % cat nextlen = 2 + max(len(leaf), len(str_cat)) lcatlen = (nextlen - len(str_cat)) // 2 rcatlen = lcatlen + (nextlen - len(str_cat)) % 2 catstr += " " * lcatlen + str_cat + " " * rcatlen lleaflen = (nextlen - len(leaf)) // 2 rleaflen = lleaflen + (nextlen - len(leaf)) % 2 leafstr += " " * lleaflen + leaf + " " * rleaflen print(leafstr.rstrip()) print(catstr.rstrip()) printCCGTree(0, tree) def printCCGTree(lwidth, tree): rwidth = lwidth if not isinstance(tree, Tree): return 2 + lwidth + len(tree) for child in tree: rwidth = max(rwidth, printCCGTree(rwidth, child)) if not isinstance(tree.label(), tuple): return max( rwidth, 2 + lwidth + len("%s" % tree.label()), 2 + lwidth + len(tree[0]) ) (token, op) = tree.label() 
if op == "Leaf": return rwidth print(lwidth * " " + (rwidth - lwidth) * "-" + "%s" % op) str_res = "%s" % (token.categ()) if token.semantics() is not None: str_res += " {" + str(token.semantics()) + "}" respadlen = (rwidth - lwidth - len(str_res)) // 2 + lwidth print(respadlen * " " + str_res) return rwidth lex = fromstring( ) def demo(): parser = CCGChartParser(lex, DefaultRuleSet) for parse in parser.parse("I might cook and eat the bacon".split()): printCCGDerivation(parse) if __name__ == "__main__": demo()
Natural Language Toolkit: Combinatory Categorial Grammar
(C) 2001-2023 NLTK Project
Author: Graeme Gange <ggange@csse.unimelb.edu.au>
URL: <https://www.nltk.org/>
For license information, see LICENSE.TXT

CCG combinators. UndirectedBinaryCombinator is an abstract class for representing a binary combinator; it merely defines functions for checking if the function and argument are able to be combined, and what the resulting category is. Note that, as no assumptions are made as to direction, the unrestricted combinators can perform all backward, forward and crossed variations of the combinators; these restrictions must be added in the rule class. DirectedBinaryCombinator is a wrapper for the undirected binary combinator: it takes left and right categories, decides which is to be the function and which the argument, and then decides whether or not they can be combined. ForwardCombinator is the class representing combinators where the primary functor is on the left; it takes an undirected combinator and a predicate which adds constraints restricting the cases in which it may apply. BackwardCombinator is the backward equivalent of the ForwardCombinator class. UndirectedFunctionApplication is the class representing function application; it implements rules of the form X/Y Y -> X (>) and the corresponding backwards application rule. The predicates for function application ensure that the left functor takes an argument on the right, or that the right functor takes an argument on the left; they are followed by the application combinator instances. UndirectedComposition is the functional composition (harmonic) combinator; it implements rules of the form X/Y Y/Z -> X/Z (B>) and the corresponding backwards and crossed variations, and can only combine two functions, where both functions must allow composition. Predicates restrict application of straight composition; for crossed composition the functors must be crossed inwards, permuting combinators must be allowed, and the resulting argument category is restricted to be primitive. The straight composition combinators and backward crossed composition follow. UndirectedSubstitution is the substitution (permutation) combinator; it implements rules of the form Y/Z (X\Y)/Z -> X/Z (<Sx) and other variations. Its checks could potentially be moved to the predicates, as the constraints may not be general to all languages. A predicate is given for forward substitution and another for backward crossed substitution, followed by the instances of the substitution combinators. innermostFunction retrieves the left-most functional category, i.e. (N\N)/(S/NP) => N\N. UndirectedTypeRaise is the undirected combinator for type raising: the argument must be a function, and the restriction that arg.res() must be a function merely reduces redundant type raising; if arg.res() is primitive, we have X Y\X =>(<T) Y/(Y\X) Y\X =>(>) Y, which is equivalent to X Y\X =>(<) Y. (Note: in can_combine, left and arg_categ are undefined.) Type raising matches only the innermost application. In the predicates for type raising, the direction of the innermost category must be towards the primary functor; the restriction that the variable must be primitive is not common to all versions of CCGs, and some have other restrictions. The module ends with instances of the type-raising combinators.
from abc import ABCMeta, abstractmethod from nltk.ccg.api import FunctionalCategory class UndirectedBinaryCombinator(metaclass=ABCMeta): @abstractmethod def can_combine(self, function, argument): pass @abstractmethod def combine(self, function, argument): pass class DirectedBinaryCombinator(metaclass=ABCMeta): @abstractmethod def can_combine(self, left, right): pass @abstractmethod def combine(self, left, right): pass class ForwardCombinator(DirectedBinaryCombinator): def __init__(self, combinator, predicate, suffix=""): self._combinator = combinator self._predicate = predicate self._suffix = suffix def can_combine(self, left, right): return self._combinator.can_combine(left, right) and self._predicate( left, right ) def combine(self, left, right): yield from self._combinator.combine(left, right) def __str__(self): return f">{self._combinator}{self._suffix}" class BackwardCombinator(DirectedBinaryCombinator): def __init__(self, combinator, predicate, suffix=""): self._combinator = combinator self._predicate = predicate self._suffix = suffix def can_combine(self, left, right): return self._combinator.can_combine(right, left) and self._predicate( left, right ) def combine(self, left, right): yield from self._combinator.combine(right, left) def __str__(self): return f"<{self._combinator}{self._suffix}" class UndirectedFunctionApplication(UndirectedBinaryCombinator): def can_combine(self, function, argument): if not function.is_function(): return False return not function.arg().can_unify(argument) is None def combine(self, function, argument): if not function.is_function(): return subs = function.arg().can_unify(argument) if subs is None: return yield function.res().substitute(subs) def __str__(self): return "" def forwardOnly(left, right): return left.dir().is_forward() def backwardOnly(left, right): return right.dir().is_backward() ForwardApplication = ForwardCombinator(UndirectedFunctionApplication(), forwardOnly) BackwardApplication = BackwardCombinator(UndirectedFunctionApplication(), backwardOnly) class UndirectedComposition(UndirectedBinaryCombinator): def can_combine(self, function, argument): if not (function.is_function() and argument.is_function()): return False if function.dir().can_compose() and argument.dir().can_compose(): return not function.arg().can_unify(argument.res()) is None return False def combine(self, function, argument): if not (function.is_function() and argument.is_function()): return if function.dir().can_compose() and argument.dir().can_compose(): subs = function.arg().can_unify(argument.res()) if subs is not None: yield FunctionalCategory( function.res().substitute(subs), argument.arg().substitute(subs), argument.dir(), ) def __str__(self): return "B" def bothForward(left, right): return left.dir().is_forward() and right.dir().is_forward() def bothBackward(left, right): return left.dir().is_backward() and right.dir().is_backward() def crossedDirs(left, right): return left.dir().is_forward() and right.dir().is_backward() def backwardBxConstraint(left, right): if not crossedDirs(left, right): return False if not left.dir().can_cross() and right.dir().can_cross(): return False return left.arg().is_primitive() ForwardComposition = ForwardCombinator(UndirectedComposition(), forwardOnly) BackwardComposition = BackwardCombinator(UndirectedComposition(), backwardOnly) BackwardBx = BackwardCombinator( UndirectedComposition(), backwardBxConstraint, suffix="x" ) class UndirectedSubstitution(UndirectedBinaryCombinator): r def can_combine(self, function, argument): if 
function.is_primitive() or argument.is_primitive(): return False if function.res().is_primitive(): return False if not function.arg().is_primitive(): return False if not (function.dir().can_compose() and argument.dir().can_compose()): return False return (function.res().arg() == argument.res()) and ( function.arg() == argument.arg() ) def combine(self, function, argument): if self.can_combine(function, argument): yield FunctionalCategory( function.res().res(), argument.arg(), argument.dir() ) def __str__(self): return "S" def forwardSConstraint(left, right): if not bothForward(left, right): return False return left.res().dir().is_forward() and left.arg().is_primitive() def backwardSxConstraint(left, right): if not left.dir().can_cross() and right.dir().can_cross(): return False if not bothForward(left, right): return False return right.res().dir().is_backward() and right.arg().is_primitive() ForwardSubstitution = ForwardCombinator(UndirectedSubstitution(), forwardSConstraint) BackwardSx = BackwardCombinator(UndirectedSubstitution(), backwardSxConstraint, "x") def innermostFunction(categ): while categ.res().is_function(): categ = categ.res() return categ class UndirectedTypeRaise(UndirectedBinaryCombinator): def can_combine(self, function, arg): if not (arg.is_function() and arg.res().is_function()): return False arg = innermostFunction(arg) subs = left.can_unify(arg_categ.arg()) if subs is not None: return True return False def combine(self, function, arg): if not ( function.is_primitive() and arg.is_function() and arg.res().is_function() ): return arg = innermostFunction(arg) subs = function.can_unify(arg.arg()) if subs is not None: xcat = arg.res().substitute(subs) yield FunctionalCategory( xcat, FunctionalCategory(xcat, function, arg.dir()), -(arg.dir()) ) def __str__(self): return "T" def forwardTConstraint(left, right): arg = innermostFunction(right) return arg.dir().is_backward() and arg.res().is_primitive() def backwardTConstraint(left, right): arg = innermostFunction(left) return arg.dir().is_forward() and arg.res().is_primitive() ForwardT = ForwardCombinator(UndirectedTypeRaise(), forwardTConstraint) BackwardT = BackwardCombinator(UndirectedTypeRaise(), backwardTConstraint)
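To exercise a directed combinator in isolation, categories can be pulled out of a small lexicon and handed to a combinator instance: can_combine checks applicability and combine yields the resulting categories. The two lexical entries below are assumptions for illustration.

from nltk.ccg.combinator import ForwardApplication
from nltk.ccg.lexicon import fromstring

toy_lex = fromstring(
    """
    :- S, NP
    eats => (S\\NP)/NP
    bacon => NP
    """
)

# Token.categ() returns the parsed CCG category of a lexical entry.
verb = toy_lex.categories("eats")[0].categ()   # (S\NP)/NP
obj = toy_lex.categories("bacon")[0].categ()   # NP

print(ForwardApplication.can_combine(verb, obj))    # True: X/Y applied to Y
print(list(ForwardApplication.combine(verb, obj)))  # yields the category S\NP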
Natural Language Toolkit: Combinatory Categorial Grammar
(C) 2001-2023 NLTK Project
Author: Graeme Gange <ggange@csse.unimelb.edu.au>
URL: <https://www.nltk.org/>
For license information, see LICENSE.TXT

CCG lexicons. Regular expressions are used for parsing components of the lexicon: one parses a primitive category and its subscripts, one separates the next primitive category from the remainder of the string, one separates the next application operator from the remainder, one parses the definition of the right-hand side (rhs) of either a word or a family, one parses a right-hand side that contains a category and maybe a semantic predicate, one parses the semantic predicate, and one strips comments from a line. Token is the class representing a token, token => category {semantics}, e.g. eat => S\var[pl]/var {\x y.eat(x,y)}, with token (string), categ (string) and semantics (Expression). CCGLexicon is the class representing a lexicon for CCG grammars, holding primitives (the list of primitive categories for the lexicon), families (families of categories) and entries (a mapping of words to possible categories); categories() returns all the possible categories for a word, start() returns the target category for the parser, and __str__ gives a string representation of the lexicon, used for debugging. For parsing lexicons: matchBrackets separates the contents matching the first set of brackets from the rest of the input; nextCategory separates the string for the next portion of the category from the rest of the string; parseApplication parses an application operator; parseSubscripts parses the subscripts for a primitive category; parsePrimitiveCategory parses a primitive category and, if the primitive is the special category 'var', replaces it with the correct CCGVar; augParseCategory parses a string representing a category and returns a tuple with (possibly) the CCG variable for the category. fromstring converts a string representation into a lexicon for CCGs: it strips comments and leading/trailing whitespace; a line of primitive categories gives the first one as the target category, i.e. :- S, N, NP, VP; the remaining lines are either a family definition, i.e. Det :: NP/N, or a word definition, i.e. which => (N\N)/(S/NP). openccg_tinytiny is a rather minimal lexicon based on the openccg 'tinytiny' grammar; it only incorporates a subset of the morphological subcategories, however:

    :- S, NP, N                 # Primitive categories
    Det :: NP/N                 # Determiners
    Pro :: NP
    IntransVsg :: S\NP[sg]      # Tensed intransitive verbs (singular)
    IntransVpl :: S\NP[pl]      # Plural
    TransVsg :: S\NP[sg]/NP     # Tensed transitive verbs (singular)
    TransVpl :: S\NP[pl]/NP     # Plural

    the => NP[sg]/N[sg]
    the => NP[pl]/N[pl]
    I => Pro
    me => Pro
    we => Pro
    us => Pro
    book => N[sg]
    books => N[pl]
    peach => N[sg]
    peaches => N[pl]
    policeman => N[sg]
    policemen => N[pl]
    boy => N[sg]
    boys => N[pl]
    sleep => IntransVsg
    sleep => IntransVpl
    eat => IntransVpl
    eat => TransVpl
    eats => IntransVsg
    eats => TransVsg
    see => TransVpl
    sees => TransVsg
import re from collections import defaultdict from nltk.ccg.api import CCGVar, Direction, FunctionalCategory, PrimitiveCategory from nltk.internals import deprecated from nltk.sem.logic import Expression PRIM_RE = re.compile(r) NEXTPRIM_RE = re.compile(r) APP_RE = re.compile(r) LEX_RE = re.compile(r, re.UNICODE) RHS_RE = re.compile(r, re.UNICODE) SEMANTICS_RE = re.compile(r, re.UNICODE) COMMENTS_RE = re.compile() class Token: def __init__(self, token, categ, semantics=None): self._token = token self._categ = categ self._semantics = semantics def categ(self): return self._categ def semantics(self): return self._semantics def __str__(self): semantics_str = "" if self._semantics is not None: semantics_str = " {" + str(self._semantics) + "}" return "" + str(self._categ) + semantics_str def __cmp__(self, other): if not isinstance(other, Token): return -1 return cmp((self._categ, self._semantics), other.categ(), other.semantics()) class CCGLexicon: def __init__(self, start, primitives, families, entries): self._start = PrimitiveCategory(start) self._primitives = primitives self._families = families self._entries = entries def categories(self, word): return self._entries[word] def start(self): return self._start def __str__(self): string = "" first = True for ident in sorted(self._entries): if not first: string = string + "\n" string = string + ident + " => " first = True for cat in self._entries[ident]: if not first: string = string + " | " else: first = False string = string + "%s" % cat return string def matchBrackets(string): rest = string[1:] inside = "(" while rest != "" and not rest.startswith(")"): if rest.startswith("("): (part, rest) = matchBrackets(rest) inside = inside + part else: inside = inside + rest[0] rest = rest[1:] if rest.startswith(")"): return (inside + ")", rest[1:]) raise AssertionError("Unmatched bracket in string '" + string + "'") def nextCategory(string): if string.startswith("("): return matchBrackets(string) return NEXTPRIM_RE.match(string).groups() def parseApplication(app): return Direction(app[0], app[1:]) def parseSubscripts(subscr): if subscr: return subscr[1:-1].split(",") return [] def parsePrimitiveCategory(chunks, primitives, families, var): if chunks[0] == "var": if chunks[1] is None: if var is None: var = CCGVar() return (var, var) catstr = chunks[0] if catstr in families: (cat, cvar) = families[catstr] if var is None: var = cvar else: cat = cat.substitute([(cvar, var)]) return (cat, var) if catstr in primitives: subscrs = parseSubscripts(chunks[1]) return (PrimitiveCategory(catstr, subscrs), var) raise AssertionError( "String '" + catstr + "' is neither a family nor primitive category." 
) def augParseCategory(line, primitives, families, var=None): (cat_string, rest) = nextCategory(line) if cat_string.startswith("("): (res, var) = augParseCategory(cat_string[1:-1], primitives, families, var) else: (res, var) = parsePrimitiveCategory( PRIM_RE.match(cat_string).groups(), primitives, families, var ) while rest != "": app = APP_RE.match(rest).groups() direction = parseApplication(app[0:3]) rest = app[3] (cat_string, rest) = nextCategory(rest) if cat_string.startswith("("): (arg, var) = augParseCategory(cat_string[1:-1], primitives, families, var) else: (arg, var) = parsePrimitiveCategory( PRIM_RE.match(cat_string).groups(), primitives, families, var ) res = FunctionalCategory(res, arg, direction) return (res, var) def fromstring(lex_str, include_semantics=False): CCGVar.reset_id() primitives = [] families = {} entries = defaultdict(list) for line in lex_str.splitlines(): line = COMMENTS_RE.match(line).groups()[0].strip() if line == "": continue if line.startswith(":-"): primitives = primitives + [ prim.strip() for prim in line[2:].strip().split(",") ] else: (ident, sep, rhs) = LEX_RE.match(line).groups() (catstr, semantics_str) = RHS_RE.match(rhs).groups() (cat, var) = augParseCategory(catstr, primitives, families) if sep == "::": families[ident] = (cat, var) else: semantics = None if include_semantics is True: if semantics_str is None: raise AssertionError( line + " must contain semantics because include_semantics is set to True" ) else: semantics = Expression.fromstring( SEMANTICS_RE.match(semantics_str).groups()[0] ) entries[ident].append(Token(ident, cat, semantics)) return CCGLexicon(primitives[0], primitives, families, entries) @deprecated("Use fromstring() instead.") def parseLexicon(lex_str): return fromstring(lex_str) openccg_tinytiny = fromstring( )
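A short sketch of the lexicon format that fromstring parses: the ':-' line declares the primitive categories (the first is the parser's target), '::' introduces a category family, and '=>' maps a word to an explicit category or a family. The entries below are made up for illustration.

from nltk.ccg.lexicon import fromstring

toy_lex = fromstring(
    """
    :- S, NP, N        # primitives; S is the target
    Det :: NP/N        # a family
    the => Det         # word -> family
    cat => N           # word -> explicit category
    sleeps => S\\NP
    """
)

print(toy_lex.start())  # the target category, S
for word in ("the", "cat", "sleeps"):
    for token in toy_lex.categories(word):
        print(word, "=>", token.categ())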
Natural Language Toolkit: Combinatory Categorial Grammar
(C) 2001-2023 NLTK Project
Author: Tanin Na Nakorn (@tanin)
URL: <https://www.nltk.org/>
For license information, see LICENSE.TXT

Helper functions for CCG semantics computation.
from nltk.sem.logic import * def compute_type_raised_semantics(semantics): core = semantics parent = None while isinstance(core, LambdaExpression): parent = core core = core.term var = Variable("F") while var in core.free(): var = unique_variable(pattern=var) core = ApplicationExpression(FunctionVariableExpression(var), core) if parent is not None: parent.term = core else: semantics = core return LambdaExpression(var, semantics) def compute_function_semantics(function, argument): return ApplicationExpression(function, argument).simplify() def compute_composition_semantics(function, argument): assert isinstance(argument, LambdaExpression), ( "`" + str(argument) + "` must be a lambda expression" ) return LambdaExpression( argument.variable, ApplicationExpression(function, argument.term).simplify() ) def compute_substitution_semantics(function, argument): assert isinstance(function, LambdaExpression) and isinstance( function.term, LambdaExpression ), ("`" + str(function) + "` must be a lambda expression with 2 arguments") assert isinstance(argument, LambdaExpression), ( "`" + str(argument) + "` must be a lambda expression" ) new_argument = ApplicationExpression( argument, VariableExpression(function.variable) ).simplify() new_term = ApplicationExpression(function.term, new_argument).simplify() return LambdaExpression(function.variable, new_term)
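The helpers operate on nltk.sem.logic expressions: function application applies and beta-reduces, while composition builds a new lambda abstraction. The lambda terms below are invented purely to illustrate the behaviour.

from nltk.sem.logic import Expression
from nltk.ccg.logic import compute_composition_semantics, compute_function_semantics

f = Expression.fromstring(r"\x.eat(john, x)")
a = Expression.fromstring("bacon")
print(compute_function_semantics(f, a))      # eat(john,bacon)

g = Expression.fromstring(r"\x.cook(x)")
h = Expression.fromstring(r"\y.mash(y)")
print(compute_composition_semantics(g, h))   # \y.cook(mash(y))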
Natural Language Toolkit: Chatbots
(C) 2001-2023 NLTK Project
Authors: Steven Bird <stevenbird1@gmail.com>
URL: <https://www.nltk.org/>
For license information, see LICENSE.TXT

Based on an Eliza implementation by Joe Strout <joe@strout.net>, Jeff Epler <jepler@inetnebr.com> and Jez Higgins <jez@jezuk.co.uk>. A class for simple chatbots. These perform simple pattern matching on sentences typed by users, and respond with automatically generated sentences. These chatbots may not work using the windows command-line or the windows IDLE GUI.
from nltk.chat.eliza import eliza_chat from nltk.chat.iesha import iesha_chat from nltk.chat.rude import rude_chat from nltk.chat.suntsu import suntsu_chat from nltk.chat.util import Chat from nltk.chat.zen import zen_chat bots = [ (eliza_chat, "Eliza (psycho-babble)"), (iesha_chat, "Iesha (teen anime junky)"), (rude_chat, "Rude (abusive bot)"), (suntsu_chat, "Suntsu (Chinese sayings)"), (zen_chat, "Zen (gems of wisdom)"), ] def chatbots(): print("Which chatbot would you like to talk to?") botcount = len(bots) for i in range(botcount): print(" %d: %s" % (i + 1, bots[i][1])) while True: choice = input(f"\nEnter a number in the range 1-{botcount}: ").strip() if choice.isdigit() and (int(choice) - 1) in range(botcount): break else: print(" Error: bad chatbot number") chatbot = bots[int(choice) - 1][0] chatbot()
Natural Language Toolkit: Eliza
(C) 2001-2023 NLTK Project
Authors: Steven Bird <stevenbird1@gmail.com>, Edward Loper <edloper@gmail.com>
URL: <https://www.nltk.org/>
For license information, see LICENSE.TXT

Based on an Eliza implementation by Joe Strout <joe@strout.net>, Jeff Epler <jepler@inetnebr.com> and Jez Higgins <mailto:jez@jezuk.co.uk>. A translation table is used to convert things you say into things the computer says back, e.g. "I am" --> "you are". A table of response pairs follows, where each pair consists of a regular expression and a list of possible responses, with group-macros labelled as %1, %2.
from nltk.chat.util import Chat, reflections pairs = ( ( r"I need (.*)", ( "Why do you need %1?", "Would it really help you to get %1?", "Are you sure you need %1?", ), ), ( r"Why don\'t you (.*)", ( "Do you really think I don't %1?", "Perhaps eventually I will %1.", "Do you really want me to %1?", ), ), ( r"Why can\'t I (.*)", ( "Do you think you should be able to %1?", "If you could %1, what would you do?", "I don't know -- why can't you %1?", "Have you really tried?", ), ), ( r"I can\'t (.*)", ( "How do you know you can't %1?", "Perhaps you could %1 if you tried.", "What would it take for you to %1?", ), ), ( r"I am (.*)", ( "Did you come to me because you are %1?", "How long have you been %1?", "How do you feel about being %1?", ), ), ( r"I\'m (.*)", ( "How does being %1 make you feel?", "Do you enjoy being %1?", "Why do you tell me you're %1?", "Why do you think you're %1?", ), ), ( r"Are you (.*)", ( "Why does it matter whether I am %1?", "Would you prefer it if I were not %1?", "Perhaps you believe I am %1.", "I may be %1 -- what do you think?", ), ), ( r"What (.*)", ( "Why do you ask?", "How would an answer to that help you?", "What do you think?", ), ), ( r"How (.*)", ( "How do you suppose?", "Perhaps you can answer your own question.", "What is it you're really asking?", ), ), ( r"Because (.*)", ( "Is that the real reason?", "What other reasons come to mind?", "Does that reason apply to anything else?", "If %1, what else must be true?", ), ), ( r"(.*) sorry (.*)", ( "There are many times when no apology is needed.", "What feelings do you have when you apologize?", ), ), ( r"Hello(.*)", ( "Hello... I'm glad you could drop by today.", "Hi there... how are you today?", "Hello, how are you feeling today?", ), ), ( r"I think (.*)", ("Do you doubt %1?", "Do you really think so?", "But you're not sure %1?"), ), ( r"(.*) friend (.*)", ( "Tell me more about your friends.", "When you think of a friend, what comes to mind?", "Why don't you tell me about a childhood friend?", ), ), (r"Yes", ("You seem quite sure.", "OK, but can you elaborate a bit?")), ( r"(.*) computer(.*)", ( "Are you really talking about me?", "Does it seem strange to talk to a computer?", "How do computers make you feel?", "Do you feel threatened by computers?", ), ), ( r"Is it (.*)", ( "Do you think it is %1?", "Perhaps it's %1 -- what do you think?", "If it were %1, what would you do?", "It could well be that %1.", ), ), ( r"It is (.*)", ( "You seem very certain.", "If I told you that it probably isn't %1, what would you feel?", ), ), ( r"Can you (.*)", ( "What makes you think I can't %1?", "If I could %1, then what?", "Why do you ask if I can %1?", ), ), ( r"Can I (.*)", ( "Perhaps you don't want to %1.", "Do you want to be able to %1?", "If you could %1, would you?", ), ), ( r"You are (.*)", ( "Why do you think I am %1?", "Does it please you to think that I'm %1?", "Perhaps you would like me to be %1.", "Perhaps you're really talking about yourself?", ), ), ( r"You\'re (.*)", ( "Why do you say I am %1?", "Why do you think I am %1?", "Are we talking about you, or me?", ), ), ( r"I don\'t (.*)", ("Don't you really %1?", "Why don't you %1?", "Do you want to %1?"), ), ( r"I feel (.*)", ( "Good, tell me more about these feelings.", "Do you often feel %1?", "When do you usually feel %1?", "When you feel %1, what do you do?", ), ), ( r"I have (.*)", ( "Why do you tell me that you've %1?", "Have you really %1?", "Now that you have %1, what will you do next?", ), ), ( r"I would (.*)", ( "Could you explain why you would %1?", 
"Why would you %1?", "Who else knows that you would %1?", ), ), ( r"Is there (.*)", ( "Do you think there is %1?", "It's likely that there is %1.", "Would you like there to be %1?", ), ), ( r"My (.*)", ( "I see, your %1.", "Why do you say that your %1?", "When your %1, how do you feel?", ), ), ( r"You (.*)", ( "We should be discussing you, not me.", "Why do you say that about me?", "Why do you care whether I %1?", ), ), (r"Why (.*)", ("Why don't you tell me the reason why %1?", "Why do you think %1?")), ( r"I want (.*)", ( "What would it mean to you if you got %1?", "Why do you want %1?", "What would you do if you got %1?", "If you got %1, then what would you do?", ), ), ( r"(.*) mother(.*)", ( "Tell me more about your mother.", "What was your relationship with your mother like?", "How do you feel about your mother?", "How does this relate to your feelings today?", "Good family relations are important.", ), ), ( r"(.*) father(.*)", ( "Tell me more about your father.", "How did your father make you feel?", "How do you feel about your father?", "Does your relationship with your father relate to your feelings today?", "Do you have trouble showing affection with your family?", ), ), ( r"(.*) child(.*)", ( "Did you have close friends as a child?", "What is your favorite childhood memory?", "Do you remember any dreams or nightmares from childhood?", "Did the other children sometimes tease you?", "How do you think your childhood experiences relate to your feelings today?", ), ), ( r"(.*)\?", ( "Why do you ask that?", "Please consider whether you can answer your own question.", "Perhaps the answer lies within yourself?", "Why don't you tell me?", ), ), ( r"quit", ( "Thank you for talking with me.", "Good-bye.", "Thank you, that will be $150. Have a good day!", ), ), ( r"(.*)", ( "Please tell me more.", "Let's change focus a bit... Tell me about your family.", "Can you elaborate on that?", "Why do you say that %1?", "I see.", "Very interesting.", "%1.", "I see. And what does that tell you?", "How does that make you feel?", "How do you feel when you say that?", ), ), ) eliza_chatbot = Chat(pairs, reflections) def eliza_chat(): print("Therapist\n---------") print("Talk to the program by typing in plain English, using normal upper-") print('and lower-case letters and punctuation. Enter "quit" when done.') print("=" * 72) print("Hello. How are you feeling today?") eliza_chatbot.converse() def demo(): eliza_chat() if __name__ == "__main__": demo()
Natural Language Toolkit: Teen Chatbot
(C) 2001-2023 NLTK Project
Author: Selina Dennis <sjmd@csse.unimelb.edu.au>
URL: <https://www.nltk.org/>
For license information, see LICENSE.TXT

This chatbot is a tongue-in-cheek take on the average teen anime junky that frequents YahooMessenger or MSNM. All spelling mistakes and flawed grammar are intentional. Note: %1, %2, etc. are used without spaces prior, as the chat bot seems to add a superfluous space when matching.
from nltk.chat.util import Chat reflections = { "am": "r", "was": "were", "i": "u", "i'd": "u'd", "i've": "u'v", "ive": "u'v", "i'll": "u'll", "my": "ur", "are": "am", "you're": "im", "you've": "ive", "you'll": "i'll", "your": "my", "yours": "mine", "you": "me", "u": "me", "ur": "my", "urs": "mine", "me": "u", } pairs = ( ( r"I\'m (.*)", ( "ur%1?? that's so cool! kekekekeke ^_^ tell me more!", "ur%1? neat!! kekeke >_<", ), ), ( r"(.*) don\'t you (.*)", ( r"u think I can%2??! really?? kekeke \<_\<", "what do u mean%2??!", "i could if i wanted, don't you think!! kekeke", ), ), (r"ye[as] [iI] (.*)", ("u%1? cool!! how?", "how come u%1??", "u%1? so do i!!")), ( r"do (you|u) (.*)\??", ("do i%2? only on tuesdays! kekeke *_*", "i dunno! do u%2??"), ), ( r"(.*)\?", ( "man u ask lots of questions!", "booooring! how old r u??", "boooooring!! ur not very fun", ), ), ( r"(cos|because) (.*)", ("hee! i don't believe u! >_<", "nuh-uh! >_<", "ooooh i agree!"), ), ( r"why can\'t [iI] (.*)", ( "i dunno! y u askin me for!", "try harder, silly! hee! ^_^", "i dunno! but when i can't%1 i jump up and down!", ), ), ( r"I can\'t (.*)", ( "u can't what??! >_<", "that's ok! i can't%1 either! kekekekeke ^_^", "try harder, silly! hee! ^&^", ), ), ( r"(.*) (like|love|watch) anime", ( "omg i love anime!! do u like sailor moon??! ^&^", "anime yay! anime rocks sooooo much!", "oooh anime! i love anime more than anything!", "anime is the bestest evar! evangelion is the best!", "hee anime is the best! do you have ur fav??", ), ), ( r"I (like|love|watch|play) (.*)", ("yay! %2 rocks!", "yay! %2 is neat!", "cool! do u like other stuff?? ^_^"), ), ( r"anime sucks|(.*) (hate|detest) anime", ( "ur a liar! i'm not gonna talk to u nemore if u h8 anime *;*", "no way! anime is the best ever!", "nuh-uh, anime is the best!", ), ), ( r"(are|r) (you|u) (.*)", ("am i%1??! how come u ask that!", "maybe! y shud i tell u?? kekeke >_>"), ), ( r"what (.*)", ("hee u think im gonna tell u? .v.", "booooooooring! ask me somethin else!"), ), (r"how (.*)", ("not tellin!! kekekekekeke ^_^",)), (r"(hi|hello|hey) (.*)", ("hi!!! how r u!!",)), ( r"quit", ( "mom says i have to go eat dinner now :,( bye!!", "awww u have to go?? see u next time!!", "how to see u again soon! ^_^", ), ), ( r"(.*)", ( "ur funny! kekeke", "boooooring! talk about something else! tell me wat u like!", "do u like anime??", "do u watch anime? i like sailor moon! ^_^", "i wish i was a kitty!! kekekeke ^_^", ), ), ) iesha_chatbot = Chat(pairs, reflections) def iesha_chat(): print("Iesha the TeenBoT\n---------") print("Talk to the program by typing in plain English, using normal upper-") print('and lower-case letters and punctuation. Enter "quit" when done.') print("=" * 72) print("hi!! i'm iesha! who r u??!") iesha_chatbot.converse() def demo(): iesha_chat() if __name__ == "__main__": demo()
Natural Language Toolkit: Rude Chatbot
(C) 2001-2023 NLTK Project
Author: Peter Spiller <pspiller@csse.unimelb.edu.au>
URL: <https://www.nltk.org/>
For license information, see LICENSE.TXT
from nltk.chat.util import Chat, reflections pairs = ( ( r"We (.*)", ( "What do you mean, 'we'?", "Don't include me in that!", "I wouldn't be so sure about that.", ), ), ( r"You should (.*)", ("Don't tell me what to do, buddy.", "Really? I should, should I?"), ), ( r"You\'re(.*)", ( "More like YOU'RE %1!", "Hah! Look who's talking.", "Come over here and tell me I'm %1.", ), ), ( r"You are(.*)", ( "More like YOU'RE %1!", "Hah! Look who's talking.", "Come over here and tell me I'm %1.", ), ), ( r"I can\'t(.*)", ( "You do sound like the type who can't %1.", "Hear that splashing sound? That's my heart bleeding for you.", "Tell somebody who might actually care.", ), ), ( r"I think (.*)", ( "I wouldn't think too hard if I were you.", "You actually think? I'd never have guessed...", ), ), ( r"I (.*)", ( "I'm getting a bit tired of hearing about you.", "How about we talk about me instead?", "Me, me, me... Frankly, I don't care.", ), ), ( r"How (.*)", ( "How do you think?", "Take a wild guess.", "I'm not even going to dignify that with an answer.", ), ), (r"What (.*)", ("Do I look like an encyclopedia?", "Figure it out yourself.")), ( r"Why (.*)", ( "Why not?", "That's so obvious I thought even you'd have already figured it out.", ), ), ( r"(.*)shut up(.*)", ( "Make me.", "Getting angry at a feeble NLP assignment? Somebody's losing it.", "Say that again, I dare you.", ), ), ( r"Shut up(.*)", ( "Make me.", "Getting angry at a feeble NLP assignment? Somebody's losing it.", "Say that again, I dare you.", ), ), ( r"Hello(.*)", ("Oh good, somebody else to talk to. Joy.", "'Hello'? How original..."), ), ( r"(.*)", ( "I'm getting bored here. Become more interesting.", "Either become more thrilling or get lost, buddy.", "Change the subject before I die of fatal boredom.", ), ), ) rude_chatbot = Chat(pairs, reflections) def rude_chat(): print("Talk to the program by typing in plain English, using normal upper-") print('and lower-case letters and punctuation. Enter "quit" when done.') print("=" * 72) print("I suppose I should say hello.") rude_chatbot.converse() def demo(): rude_chat() if __name__ == "__main__": demo()
Natural Language Toolkit: Sun Tsu-Bot
(C) 2001-2023 NLTK Project
Author: Sam Huston 2007
URL: <https://www.nltk.org/>
For license information, see LICENSE.TXT

Tsu bot responds to all queries with a Sun Tsu saying, quoted from Sun Tsu's The Art of War, translated by Lionel Giles, M.A. 1910, hosted by the Gutenberg Project, https://www.gutenberg.org/.
from nltk.chat.util import Chat, reflections pairs = ( (r"quit", ("Good-bye.", "Plan well", "May victory be your future")), ( r"[^\?]*\?", ( "Please consider whether you can answer your own question.", "Ask me no questions!", ), ), ( r"[0-9]+(.*)", ( "It is the rule in war, if our forces are ten to the enemy's one, to surround him; if five to one, to attack him; if twice as numerous, to divide our army into two.", "There are five essentials for victory", ), ), ( r"[A-Ca-c](.*)", ( "The art of war is of vital importance to the State.", "All warfare is based on deception.", "If your opponent is secure at all points, be prepared for him. If he is in superior strength, evade him.", "If the campaign is protracted, the resources of the State will not be equal to the strain.", "Attack him where he is unprepared, appear where you are not expected.", "There is no instance of a country having benefited from prolonged warfare.", ), ), ( r"[D-Fd-f](.*)", ( "The skillful soldier does not raise a second levy, neither are his supply-wagons loaded more than twice.", "Bring war material with you from home, but forage on the enemy.", "In war, then, let your great object be victory, not lengthy campaigns.", "To fight and conquer in all your battles is not supreme excellence; supreme excellence consists in breaking the enemy's resistance without fighting.", ), ), ( r"[G-Ig-i](.*)", ( "Heaven signifies night and day, cold and heat, times and seasons.", "It is the rule in war, if our forces are ten to the enemy's one, to surround him; if five to one, to attack him; if twice as numerous, to divide our army into two.", "The good fighters of old first put themselves beyond the possibility of defeat, and then waited for an opportunity of defeating the enemy.", "One may know how to conquer without being able to do it.", ), ), ( r"[J-Lj-l](.*)", ( "There are three ways in which a ruler can bring misfortune upon his army.", "By commanding the army to advance or to retreat, being ignorant of the fact that it cannot obey. This is called hobbling the army.", "By attempting to govern an army in the same way as he administers a kingdom, being ignorant of the conditions which obtain in an army. This causes restlessness in the soldier's minds.", "By employing the officers of his army without discrimination, through ignorance of the military principle of adaptation to circumstances. 
This shakes the confidence of the soldiers.", "There are five essentials for victory", "He will win who knows when to fight and when not to fight.", "He will win who knows how to handle both superior and inferior forces.", "He will win whose army is animated by the same spirit throughout all its ranks.", "He will win who, prepared himself, waits to take the enemy unprepared.", "He will win who has military capacity and is not interfered with by the sovereign.", ), ), ( r"[M-Om-o](.*)", ( "If you know the enemy and know yourself, you need not fear the result of a hundred battles.", "If you know yourself but not the enemy, for every victory gained you will also suffer a defeat.", "If you know neither the enemy nor yourself, you will succumb in every battle.", "The control of a large force is the same principle as the control of a few men: it is merely a question of dividing up their numbers.", ), ), ( r"[P-Rp-r](.*)", ( "Security against defeat implies defensive tactics; ability to defeat the enemy means taking the offensive.", "Standing on the defensive indicates insufficient strength; attacking, a superabundance of strength.", "He wins his battles by making no mistakes. Making no mistakes is what establishes the certainty of victory, for it means conquering an enemy that is already defeated.", "A victorious army opposed to a routed one, is as a pound's weight placed in the scale against a single grain.", "The onrush of a conquering force is like the bursting of pent-up waters into a chasm a thousand fathoms deep.", ), ), ( r"[S-Us-u](.*)", ( "What the ancients called a clever fighter is one who not only wins, but excels in winning with ease.", "Hence his victories bring him neither reputation for wisdom nor credit for courage.", "Hence the skillful fighter puts himself into a position which makes defeat impossible, and does not miss the moment for defeating the enemy.", "In war the victorious strategist only seeks battle after the victory has been won, whereas he who is destined to defeat first fights and afterwards looks for victory.", "There are not more than five musical notes, yet the combinations of these five give rise to more melodies than can ever be heard.", "Appear at points which the enemy must hasten to defend; march swiftly to places where you are not expected.", ), ), ( r"[V-Zv-z](.*)", ( "It is a matter of life and death, a road either to safety or to ruin.", "Hold out baits to entice the enemy. Feign disorder, and crush him.", "All men can see the tactics whereby I conquer, but what none can see is the strategy out of which victory is evolved.", "Do not repeat the tactics which have gained you one victory, but let your methods be regulated by the infinite variety of circumstances.", "So in war, the way is to avoid what is strong and to strike at what is weak.", "Just as water retains no constant shape, so in warfare there are no constant conditions.", ), ), (r"(.*)", ("Your statement insults me.", "")), ) suntsu_chatbot = Chat(pairs, reflections) def suntsu_chat(): print("Talk to the program by typing in plain English, using normal upper-") print('and lower-case letters and punctuation. Enter "quit" when done.') print("=" * 72) print("You seek enlightenment?") suntsu_chatbot.converse() def demo(): suntsu_chat() if __name__ == "__main__": demo()
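The Sun Tsu bot's pattern table is unusual in that, apart from questions and "quit", inputs are routed by their first letter (the [A-C], [D-F], ... ranges), so the pool of sayings depends on the opening character rather than on the content of the sentence. A quick check of that behaviour:

from nltk.chat.suntsu import suntsu_chatbot

# "Battles..." starts in the A-C range and "Warfare..." in the V-Z range,
# so the two replies are drawn from different pools of sayings.
print(suntsu_chatbot.respond("Battles worry me"))
print(suntsu_chatbot.respond("Warfare interests me"))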
Natural Language Toolkit: Chatbot Utilities
(C) 2001-2023 NLTK Project
Authors: Steven Bird <stevenbird1@gmail.com>
URL: <https://www.nltk.org/>
For license information, see LICENSE.TXT

Based on an Eliza implementation by Joe Strout <joe@strout.net>, Jeff Epler <jepler@inetnebr.com> and Jez Higgins <jez@jezuk.co.uk>. The Chat class is initialized with pairs and reflections. pairs is a list of patterns and responses: each pattern is a regular expression matching the user's statement or question, e.g. r'I like (.*)', and for each such pattern a list of possible responses is given, e.g. ['Why do you like %1', 'Did you ever dislike %1']. Material which is matched by parenthesized sections of the patterns (e.g. (.*)) is mapped to the numbered positions in the responses, e.g. %1. :type pairs: list of tuple; :param pairs: the patterns and responses; :type reflections: dict; :param reflections: a mapping between first and second person expressions; :rtype: None. _substitute substitutes words in the string according to the specified reflections, e.g. "I'm" -> "you are" (:type str: str; :param str: the string to be mapped; :rtype: str). respond generates a response to the user input (:type str: str; :param str: the string to be mapped; :rtype: str): it checks each pattern, and if the pattern matched, picks a random response, processes wildcards, and fixes munged punctuation at the end. converse holds a conversation with a chatbot.
import random import re reflections = { "i am": "you are", "i was": "you were", "i": "you", "i'm": "you are", "i'd": "you would", "i've": "you have", "i'll": "you will", "my": "your", "you are": "I am", "you were": "I was", "you've": "I have", "you'll": "I will", "your": "my", "yours": "mine", "you": "me", "me": "you", } class Chat: def __init__(self, pairs, reflections={}): self._pairs = [(re.compile(x, re.IGNORECASE), y) for (x, y) in pairs] self._reflections = reflections self._regex = self._compile_reflections() def _compile_reflections(self): sorted_refl = sorted(self._reflections, key=len, reverse=True) return re.compile( r"\b({})\b".format("|".join(map(re.escape, sorted_refl))), re.IGNORECASE ) def _substitute(self, str): return self._regex.sub( lambda mo: self._reflections[mo.string[mo.start() : mo.end()]], str.lower() ) def _wildcards(self, response, match): pos = response.find("%") while pos >= 0: num = int(response[pos + 1 : pos + 2]) response = ( response[:pos] + self._substitute(match.group(num)) + response[pos + 2 :] ) pos = response.find("%") return response def respond(self, str): for (pattern, response) in self._pairs: match = pattern.match(str) if match: resp = random.choice(response) resp = self._wildcards(resp, match) if resp[-2:] == "?.": resp = resp[:-2] + "." if resp[-2:] == "??": resp = resp[:-2] + "?" return resp def converse(self, quit="quit"): user_input = "" while user_input != quit: user_input = quit try: user_input = input(">") except EOFError: print(user_input) if user_input: while user_input[-1] in "!.": user_input = user_input[:-1] print(self.respond(user_input))
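A minimal Chat instance shows how the pieces fit together: %N wildcards in a response are filled from the corresponding regex groups after being passed through the reflection table, so first-person phrases flip to second person. The pattern table below is made up for illustration.

from nltk.chat.util import Chat, reflections

toy_pairs = (
    (r"I like (.*)", ("Why do you like %1?", "Since when do you like %1?")),
    (r"(.*)", ("Tell me more.",)),
)

bot = Chat(toy_pairs, reflections)
# "my dog" is reflected to "your dog" before being substituted for %1.
print(bot.respond("I like my dog"))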
Natural Language Toolkit: Zen Chatbot
(C) 2001-2023 NLTK Project
Author: Amy Holland <amyrh@csse.unimelb.edu.au>
URL: <https://www.nltk.org/>
For license information, see LICENSE.TXT

Zen Chatbot talks in gems of Zen wisdom. This is a sample conversation with Zen Chatbot:

    ZC: Welcome, my child.
    me: Good afternoon.
    ZC: Ask the question you have come to ask.
    me: How can I achieve enlightenment?
    ZC: How do you suppose?
    me: Through meditation.
    ZC: Form is emptiness, and emptiness form.
    me: How can I empty my mind of worldly troubles?
    ZC: Will an answer to that really help in your search for enlightenment?
    me: Yes.
    ZC: It is better to be right than to be certain.
    me: I seek truth and wisdom.
    ZC: The search for truth is a long journey.
    me: Are you sure?
    ZC: Maybe sure, maybe not sure.

The chatbot structure is based on that of chat.eliza; thus it uses a translation table to convert from question to response, i.e. "I am" --> "you are". Of course, since Zen Chatbot does not understand the meaning of any words, responses are very limited: Zen Chatbot will usually answer very vaguely, or respond to a question by asking a different question, in much the same way as Eliza. Responses are matched top to bottom, so non-specific matches occur later; for each match a list of possible responses is provided. Notes on the response patterns: Zen Chatbot opens with the line "Welcome, my child", and the usual response will be a greeting (problem: 'good' matches "good morning", "good day" etc., but also "good grief!" and other sentences starting with the word 'good' that may not be a greeting). "I need" and "I want" can be followed by a thing (eg 'help') or an action (eg 'to see you'); this is a problem with this style of response, e.g. Person: "I need you", Chatbot: "me can be achieved by hard work and dedication of the mind", i.e. 'you' is not really a thing that can be mapped this way, so this interpretation only makes sense for some inputs. "Why" questions are separated into three types: "why..I" (e.g. "why am I here?", "why do I like cake?"), "why..you" (e.g. "why are you here?", "why won't you tell me?") and "why..." (e.g. "why is the sky blue?"); problems: Person: "Why can't you tell me?", Chatbot: "Are you sure I tell you?", a style that works for positives (e.g. "why do you like cake?") but does not work for negatives (e.g. "why don't you like cake?"). Further patterns cover: e.g. "are you listening?", "are you a duck?"; e.g. "am I a duck?", "am I going to die?"; "what" questions, e.g. "what time is it?" (problem: Person: "What do you want?", Chatbot: "Seek truth, not what do me want"); "how" questions, e.g. "how do you do?"; "can you" questions, e.g. "can you run?", "can you come over here please?"; "can i" questions, e.g. "can I have some cake?", "can I know truth?"; e.g. "it is raining", which implies the speaker is certain of a fact; e.g. "is there a doctor in the house?"; e.g. "is it possible?", "is this true?"; non-specific questions; expressions of hate of the form "I hate you" or "Kelly hates cheese"; statements containing the word 'truth'; desire to do an action, e.g. "I want to go shopping"; desire for an object, e.g. "I want a pony"; e.g. "I can't wait" or "I can't do this"; "I think..", which indicates uncertainty, e.g. "I think so" (problem: exceptions, e.g. "I think, therefore I am"); "I feel...(emotions/sick/light-headed)"; an exclamation mark indicating emotion, e.g. "Wow!" or "No!"; because [statement], e.g. "because I said so"; yes or no, raising an issue of certainty/correctness; sentences containing the word 'love'; sentences containing the word 'understand'; "I", "me", "my", where the person is talking about themself (this breaks down when words contain these, eg 'thyme', 'Irish'); "you" starting a sentence, e.g. "You stink!"; saying goodbye with some extra Zen wisdom; and a fall-through case: when stumped, respond with generic Zen wisdom.
from nltk.chat.util import Chat, reflections responses = ( ( r"(hello(.*))|(good [a-zA-Z]+)", ( "The path to enlightenment is often difficult to see.", "Greetings. I sense your mind is troubled. Tell me of your troubles.", "Ask the question you have come to ask.", "Hello. Do you seek englightenment?", ), ), ( r"i need (.*)", ( "%1 can be achieved by hard work and dedication of the mind.", "%1 is not a need, but a desire of the mind. Clear your mind of such concerns.", "Focus your mind on%1, and you will find what you need.", ), ), ( r"i want (.*)", ( "Desires of the heart will distract you from the path to enlightenment.", "Will%1 help you attain enlightenment?", "Is%1 a desire of the mind, or of the heart?", ), ), (r"why (.*) i (.*)\?", ("You%1%2?", "Perhaps you only think you%1%2")), (r"why (.*) you(.*)\?", ("Why%1 you%2?", "%2 I%1", "Are you sure I%2?")), (r"why (.*)\?", ("I cannot tell you why%1.", "Why do you think %1?")), ( r"are you (.*)\?", ("Maybe%1, maybe not%1.", "Whether I am%1 or not is God's business."), ), ( r"am i (.*)\?", ("Perhaps%1, perhaps not%1.", "Whether you are%1 or not is not for me to say."), ), (r"what (.*)\?", ("Seek truth, not what%1.", "What%1 should not concern you.")), ( r"how (.*)\?", ( "How do you suppose?", "Will an answer to that really help in your search for enlightenment?", "Ask yourself not how, but why.", ), ), ( r"can you (.*)\?", ( "I probably can, but I may not.", "Maybe I can%1, and maybe I cannot.", "I can do all, and I can do nothing.", ), ), ( r"can i (.*)\?", ( "You can%1 if you believe you can%1, and have a pure spirit.", "Seek truth and you will know if you can%1.", ), ), ( r"it is (.*)", ( "How can you be certain that%1, when you do not even know yourself?", "Whether it is%1 or not does not change the way the world is.", ), ), ( r"is there (.*)\?", ("There is%1 if you believe there is.", "It is possible that there is%1."), ), (r"is(.*)\?", ("%1 is not relevant.", "Does this matter?")), ( r"(.*)\?", ( "Do you think %1?", "You seek the truth. Does the truth seek you?", "If you intentionally pursue the answers to your questions, the answers become hard to see.", "The answer to your question cannot be told. It must be experienced.", ), ), ( r"(.*) (hate[s]?)|(dislike[s]?)|(don\'t like)(.*)", ( "Perhaps it is not about hating %2, but about hate from within.", "Weeds only grow when we dislike them", "Hate is a very strong emotion.", ), ), ( r"(.*) truth(.*)", ( "Seek truth, and truth will seek you.", "Remember, it is not the spoon which bends - only yourself.", "The search for truth is a long journey.", ), ), ( r"i want to (.*)", ("You may %1 if your heart truly desires to.", "You may have to %1."), ), ( r"i want (.*)", ( "Does your heart truly desire %1?", "Is this a desire of the heart, or of the mind?", ), ), ( r"i can\'t (.*)", ( "What we can and can't do is a limitation of the mind.", "There are limitations of the body, and limitations of the mind.", "Have you tried to%1 with a clear mind?", ), ), ( r"i think (.*)", ( "Uncertainty in an uncertain world.", "Indeed, how can we be certain of anything in such uncertain times.", "Are you not, in fact, certain that%1?", ), ), ( r"i feel (.*)", ( "Your body and your emotions are both symptoms of your mind." 
"What do you believe is the root of such feelings?", "Feeling%1 can be a sign of your state-of-mind.", ), ), ( r"(.*)!", ( "I sense that you are feeling emotional today.", "You need to calm your emotions.", ), ), ( r"because (.*)", ( "Does knowning the reasons behind things help you to understand" " the things themselves?", "If%1, what else must be true?", ), ), ( r"(yes)|(no)", ( "Is there certainty in an uncertain world?", "It is better to be right than to be certain.", ), ), ( r"(.*)love(.*)", ( "Think of the trees: they let the birds perch and fly with no intention to call them when they come, and no longing for their return when they fly away. Let your heart be like the trees.", "Free love!", ), ), ( r"(.*)understand(.*)", ( "If you understand, things are just as they are;" " if you do not understand, things are just as they are.", "Imagination is more important than knowledge.", ), ), ( r"(.*)(me )|( me)|(my)|(mine)|(i)(.*)", ( "'I', 'me', 'my'... these are selfish expressions.", "Have you ever considered that you might be a selfish person?", "Try to consider others, not just yourself.", "Think not just of yourself, but of others.", ), ), ( r"you (.*)", ("My path is not of concern to you.", "I am but one, and you but one more."), ), ( r"exit", ( "Farewell. The obstacle is the path.", "Farewell. Life is a journey, not a destination.", "Good bye. We are cups, constantly and quietly being filled." "\nThe trick is knowning how to tip ourselves over and let the beautiful stuff out.", ), ), ( r"(.*)", ( "When you're enlightened, every word is wisdom.", "Random talk is useless.", "The reverse side also has a reverse side.", "Form is emptiness, and emptiness is form.", "I pour out a cup of water. Is the cup empty?", ), ), ) zen_chatbot = Chat(responses, reflections) def zen_chat(): print("*" * 75) print("Zen Chatbot!".center(75)) print("*" * 75) print('"Look beyond mere words and letters - look into your mind"'.center(75)) print("* Talk your way to truth with Zen Chatbot.") print("* Type 'quit' when you have had enough.") print("*" * 75) print("Welcome, my child.") zen_chatbot.converse() def demo(): zen_chat() if __name__ == "__main__": demo()
Natural Language Toolkit: Chunkers
(C) 2001-2023 NLTK Project
Authors: Steven Bird <stevenbird1@gmail.com>, Edward Loper <edloper@gmail.com>
URL: <https://www.nltk.org/>; for license information, see LICENSE.TXT

Classes and interfaces for identifying non-overlapping linguistic groups (such as base noun phrases) in unrestricted text. This task is called "chunk parsing" or "chunking", and the identified groups are called "chunks". The chunked text is represented using a shallow tree called a "chunk structure". A chunk structure is a tree containing tokens and chunks, where each chunk is a subtree containing only tokens. For example, the chunk structure for base noun phrase chunks in the sentence "I saw the big dog on the hill" is:

    (SENTENCE: (NP: <I>) <saw> (NP: <the> <big> <dog>) <on> (NP: <the> <hill>))

To convert a chunk structure back to a list of tokens, simply use the chunk structure's leaves() method.

This module defines ChunkParserI, a standard interface for chunking texts; and RegexpChunkParser, a regular-expression based implementation of that interface. It also defines ChunkScore, a utility class for scoring chunk parsers.

RegexpChunkParser
RegexpChunkParser is an implementation of the chunk parser interface that uses regular expressions over tags to chunk a text. Its parse() method first constructs a ChunkString, which encodes a particular chunking of the input text. Initially, nothing is chunked. RegexpChunkParser then applies a sequence of RegexpChunkRule rules to the ChunkString, each of which modifies the chunking that it encodes. Finally, the ChunkString is transformed back into a chunk structure, which is returned.

RegexpChunkParser can only be used to chunk a single kind of phrase. For example, you can use a RegexpChunkParser to chunk the noun phrases in a text, or the verb phrases in a text, but you can not use it to simultaneously chunk both noun phrases and verb phrases in the same text. (This is a limitation of RegexpChunkParser, not of chunk parsers in general.)

RegexpChunkRules
A RegexpChunkRule is a transformational rule that updates the chunking of a text by modifying its ChunkString. Each RegexpChunkRule defines the apply() method, which modifies the chunking encoded by a ChunkString. The RegexpChunkRule class itself can be used to implement any transformational rule based on regular expressions. There are also a number of subclasses, which can be used to implement simpler types of rules:
- ChunkRule chunks anything that matches a given regular expression.
- StripRule strips anything that matches a given regular expression.
- UnChunkRule will un-chunk any chunk that matches a given regular expression.
- MergeRule can be used to merge two contiguous chunks.
- SplitRule can be used to split a single chunk into two smaller chunks.
- ExpandLeftRule will expand a chunk to incorporate new unchunked material on the left.
- ExpandRightRule will expand a chunk to incorporate new unchunked material on the right.

Tag Patterns
A RegexpChunkRule uses a modified version of regular expression patterns, called "tag patterns". Tag patterns are used to match sequences of tags. Examples of tag patterns are r'(<DT>|<JJ>|<NN>)+', r'<NN>+', and r'<NN.*>'. The differences between regular expression patterns and tag patterns are:
- In tag patterns, '<' and '>' act as parentheses; so '<NN>+' matches one or more repetitions of '<NN>', not '<NN' followed by one or more repetitions of '>'.
- Whitespace in tag patterns is ignored, so '<DT> | <NN>' is equivalent to '<DT>|<NN>'.
- In tag patterns, '.' is equivalent to '[^{}<>]'; so '<NN.*>' matches any single tag starting with 'NN'.
The function tag_pattern2re_pattern can be used to transform a tag pattern to an equivalent regular expression pattern.

Efficiency
Preliminary tests indicate that RegexpChunkParser can chunk at a rate of about 300 tokens/second, with a moderately complex rule set. There may be problems if RegexpChunkParser is used with more than 5,000 tokens at a time. In particular, evaluation of some regular expressions may cause the Python regular expression engine to exceed its maximum recursion depth. We have attempted to minimize these problems, but it is impossible to avoid them completely. We therefore recommend that you apply the chunk parser to a single sentence at a time.

Emacs Tip
If you evaluate a short elisp expression in Emacs, it will colorize a ChunkString when you use an interactive Python shell with Emacs or XEmacs ("C-c !"). (The original snippet defines comint-mode-font-lock-keywords using font-lock-reference-face and font-lock-function-name-face and adds a comint-mode-hook that turns on font-lock; the exact regular expressions are not recoverable from this extract.) You can evaluate the code by copying it to a temporary buffer, placing the cursor after the last close parenthesis, and typing "C-x C-e". You should evaluate it before running the interactive session. The change will last until you close Emacs.

Unresolved Issues
If we use the re module for regular expressions, Python's regular expression engine generates "maximum recursion depth exceeded" errors when processing very large texts, even for regular expressions that should not require any recursion. We therefore use the pre module instead. But note that pre does not include Unicode support, so this module will not work with Unicode strings. Note also that pre regular expressions are not quite as advanced as re ones (e.g. no leftward zero-length assertions).

chunk_tag_pattern (regexp): a regular expression to test whether a tag pattern is valid.

Remaining comments from the module body: the standard treebank POS tagger; ne_chunk uses NLTK's currently recommended named entity chunker to chunk the given list of tagged tokens; ne_chunk_sents does the same for a list of tagged sentences, each consisting of a list of tagged tokens.
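The tag patterns described above are easiest to see in action via nltk.RegexpParser, which builds a RegexpChunkParser from a grammar string. The grammar and sentence below are invented for illustration and are not part of the module.

# Chunk NPs consisting of an optional determiner, any adjectives, and a noun.
import nltk

grammar = r"NP: {<DT>?<JJ>*<NN>}"
cp = nltk.RegexpParser(grammar)
tagged = [("the", "DT"), ("big", "JJ"), ("dog", "NN"),
          ("on", "IN"), ("the", "DT"), ("hill", "NN")]
print(cp.parse(tagged))
# Expected shape: (S (NP the/DT big/JJ dog/NN) on/IN (NP the/DT hill/NN))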
from nltk.chunk.api import ChunkParserI from nltk.chunk.regexp import RegexpChunkParser, RegexpParser from nltk.chunk.util import ( ChunkScore, accuracy, conllstr2tree, conlltags2tree, ieerstr2tree, tagstr2tree, tree2conllstr, tree2conlltags, ) from nltk.data import load _BINARY_NE_CHUNKER = "chunkers/maxent_ne_chunker/english_ace_binary.pickle" _MULTICLASS_NE_CHUNKER = "chunkers/maxent_ne_chunker/english_ace_multiclass.pickle" def ne_chunk(tagged_tokens, binary=False): if binary: chunker_pickle = _BINARY_NE_CHUNKER else: chunker_pickle = _MULTICLASS_NE_CHUNKER chunker = load(chunker_pickle) return chunker.parse(tagged_tokens) def ne_chunk_sents(tagged_sentences, binary=False): if binary: chunker_pickle = _BINARY_NE_CHUNKER else: chunker_pickle = _MULTICLASS_NE_CHUNKER chunker = load(chunker_pickle) return chunker.parse_sents(tagged_sentences)
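A short usage sketch for the ne_chunk() function defined above. It assumes the relevant NLTK resources (tokenizer and tagger models plus the maxent_ne_chunker pickles referenced by the module) have already been fetched with nltk.download(); the sentence is invented.

from nltk import pos_tag, word_tokenize

sent = "Samuel lives in New York and works for the United Nations."
tree = ne_chunk(pos_tag(word_tokenize(sent)))
print(tree)  # named entities show up as subtrees, e.g. (GPE New/NNP York/NNP)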
Natural Language Toolkit: Chunk parsing API
(C) 2001-2023 NLTK Project
Authors: Edward Loper <edloper@gmail.com>, Steven Bird <stevenbird1@gmail.com> (minor additions)
URL: <https://www.nltk.org/>; for license information, see LICENSE.TXT

Chunk parser interface. ChunkParserI is a processing interface for identifying non-overlapping groups in unrestricted text. Typically, chunk parsers are used to find base syntactic constituents, such as base noun phrases. Unlike ParserI, ChunkParserI guarantees that the parse() method will always generate a parse.

parse(tokens): return the best chunk structure for the given tokens and return a tree.
:param tokens: the list of (word, tag) tokens to be chunked. :type tokens: list(tuple) :rtype: Tree

accuracy(gold): score the accuracy of the chunker against the gold standard. Remove the chunking from the gold standard text, rechunk it using the chunker, and return a ChunkScore object reflecting the performance of this chunk parser.
:type gold: list(Tree) :param gold: the list of chunked sentences to score the chunker on. :rtype: ChunkScore
from nltk.chunk.util import ChunkScore from nltk.internals import deprecated from nltk.parse import ParserI class ChunkParserI(ParserI): def parse(self, tokens): raise NotImplementedError() @deprecated("Use accuracy(gold) instead.") def evaluate(self, gold): return self.accuracy(gold) def accuracy(self, gold): chunkscore = ChunkScore() for correct in gold: chunkscore.score(correct, self.parse(correct.leaves())) return chunkscore
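Because ChunkParserI only requires parse() to be overridden, a toy implementation is enough to exercise the interface. The class below is a hypothetical sketch, not a useful chunker.

from nltk.tree import Tree

class EveryTokenChunker(ChunkParserI):
    """Toy chunker: wraps every (word, tag) token in its own NP chunk."""

    def parse(self, tokens):
        return Tree("S", [Tree("NP", [tok]) for tok in tokens])

chunker = EveryTokenChunker()
print(chunker.parse([("dogs", "NNS"), ("bark", "VBP")]))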
Natural Language Toolkit: Chunk parsing API (named-entity chunker)
(C) 2001-2023 NLTK Project
Author: Edward Loper <edloper@gmail.com>
URL: <https://www.nltk.org/>; for license information, see LICENSE.TXT

Named entity chunker. The remaining comments in this module, in order of appearance:
- NEChunkParserTagger is the IOB tagger used by the chunk parser (a bare figure, 89.6, appears in the original comments, presumably an accuracy note).
- parse() expects a list of POS-tagged words; each token should be a POS-tagged word. Parsing first converts to a tagged sequence.
- _tagged_to_parse() converts a list of tagged tokens to a chunk-parse tree; _parse_to_tagged() converts a chunk-parse tree to a list of tagged tokens.
- postag_tree() performs part-of-speech tagging of a tree's leaves.
- load_ace_file() reads the XML file and gets a list of entities (only NEs); reads the text file and marks the entities; strips XML tags, since they don't count towards the indices; blanks out anything before/after the text; simplifies quotes. The 'binary' format makes a binary distinction (NE or not NE); the 'multiclass' format distinguishes the NE type. In both cases the original comments note that overlapping entities could be dealt with better.
- The data-loading code probably belongs in a more general-purpose location (as does the _parse_to_tagged function).
- When saving the trained chunker, make sure that the pickled object has the right class name.
import os import pickle import re from xml.etree import ElementTree as ET from nltk.tag import ClassifierBasedTagger, pos_tag try: from nltk.classify import MaxentClassifier except ImportError: pass from nltk.chunk.api import ChunkParserI from nltk.chunk.util import ChunkScore from nltk.data import find from nltk.tokenize import word_tokenize from nltk.tree import Tree class NEChunkParserTagger(ClassifierBasedTagger): def __init__(self, train): ClassifierBasedTagger.__init__( self, train=train, classifier_builder=self._classifier_builder ) def _classifier_builder(self, train): return MaxentClassifier.train( train, algorithm="megam", gaussian_prior_sigma=1, trace=2 ) def _english_wordlist(self): try: wl = self._en_wordlist except AttributeError: from nltk.corpus import words self._en_wordlist = set(words.words("en-basic")) wl = self._en_wordlist return wl def _feature_detector(self, tokens, index, history): word = tokens[index][0] pos = simplify_pos(tokens[index][1]) if index == 0: prevword = prevprevword = None prevpos = prevprevpos = None prevshape = prevtag = prevprevtag = None elif index == 1: prevword = tokens[index - 1][0].lower() prevprevword = None prevpos = simplify_pos(tokens[index - 1][1]) prevprevpos = None prevtag = history[index - 1][0] prevshape = prevprevtag = None else: prevword = tokens[index - 1][0].lower() prevprevword = tokens[index - 2][0].lower() prevpos = simplify_pos(tokens[index - 1][1]) prevprevpos = simplify_pos(tokens[index - 2][1]) prevtag = history[index - 1] prevprevtag = history[index - 2] prevshape = shape(prevword) if index == len(tokens) - 1: nextword = nextnextword = None nextpos = nextnextpos = None elif index == len(tokens) - 2: nextword = tokens[index + 1][0].lower() nextpos = tokens[index + 1][1].lower() nextnextword = None nextnextpos = None else: nextword = tokens[index + 1][0].lower() nextpos = tokens[index + 1][1].lower() nextnextword = tokens[index + 2][0].lower() nextnextpos = tokens[index + 2][1].lower() features = { "bias": True, "shape": shape(word), "wordlen": len(word), "prefix3": word[:3].lower(), "suffix3": word[-3:].lower(), "pos": pos, "word": word, "en-wordlist": (word in self._english_wordlist()), "prevtag": prevtag, "prevpos": prevpos, "nextpos": nextpos, "prevword": prevword, "nextword": nextword, "word+nextpos": f"{word.lower()}+{nextpos}", "pos+prevtag": f"{pos}+{prevtag}", "shape+prevtag": f"{prevshape}+{prevtag}", } return features class NEChunkParser(ChunkParserI): def __init__(self, train): self._train(train) def parse(self, tokens): tagged = self._tagger.tag(tokens) tree = self._tagged_to_parse(tagged) return tree def _train(self, corpus): corpus = [self._parse_to_tagged(s) for s in corpus] self._tagger = NEChunkParserTagger(train=corpus) def _tagged_to_parse(self, tagged_tokens): sent = Tree("S", []) for (tok, tag) in tagged_tokens: if tag == "O": sent.append(tok) elif tag.startswith("B-"): sent.append(Tree(tag[2:], [tok])) elif tag.startswith("I-"): if sent and isinstance(sent[-1], Tree) and sent[-1].label() == tag[2:]: sent[-1].append(tok) else: sent.append(Tree(tag[2:], [tok])) return sent @staticmethod def _parse_to_tagged(sent): toks = [] for child in sent: if isinstance(child, Tree): if len(child) == 0: print("Warning -- empty chunk in sentence") continue toks.append((child[0], f"B-{child.label()}")) for tok in child[1:]: toks.append((tok, f"I-{child.label()}")) else: toks.append((child, "O")) return toks def shape(word): if re.match(r"[0-9]+(\.[0-9]*)?|[0-9]*\.[0-9]+$", word, re.UNICODE): return "number" elif 
re.match(r"\W+$", word, re.UNICODE): return "punct" elif re.match(r"\w+$", word, re.UNICODE): if word.istitle(): return "upcase" elif word.islower(): return "downcase" else: return "mixedcase" else: return "other" def simplify_pos(s): if s.startswith("V"): return "V" else: return s.split("-")[0] def postag_tree(tree): words = tree.leaves() tag_iter = (pos for (word, pos) in pos_tag(words)) newtree = Tree("S", []) for child in tree: if isinstance(child, Tree): newtree.append(Tree(child.label(), [])) for subchild in child: newtree[-1].append((subchild, next(tag_iter))) else: newtree.append((child, next(tag_iter))) return newtree def load_ace_data(roots, fmt="binary", skip_bnews=True): for root in roots: for root, dirs, files in os.walk(root): if root.endswith("bnews") and skip_bnews: continue for f in files: if f.endswith(".sgm"): yield from load_ace_file(os.path.join(root, f), fmt) def load_ace_file(textfile, fmt): print(f" - {os.path.split(textfile)[1]}") annfile = textfile + ".tmx.rdc.xml" entities = [] with open(annfile) as infile: xml = ET.parse(infile).getroot() for entity in xml.findall("document/entity"): typ = entity.find("entity_type").text for mention in entity.findall("entity_mention"): if mention.get("TYPE") != "NAME": continue s = int(mention.find("head/charseq/start").text) e = int(mention.find("head/charseq/end").text) + 1 entities.append((s, e, typ)) with open(textfile) as infile: text = infile.read() text = re.sub("<(?!/?TEXT)[^>]+>", "", text) def subfunc(m): return " " * (m.end() - m.start() - 6) text = re.sub(r"[\s\S]*<TEXT>", subfunc, text) text = re.sub(r"</TEXT>[\s\S]*", "", text) text = re.sub("``", ' "', text) text = re.sub("''", '" ', text) entity_types = {typ for (s, e, typ) in entities} if fmt == "binary": i = 0 toks = Tree("S", []) for (s, e, typ) in sorted(entities): if s < i: s = i if e <= s: continue toks.extend(word_tokenize(text[i:s])) toks.append(Tree("NE", text[s:e].split())) i = e toks.extend(word_tokenize(text[i:])) yield toks elif fmt == "multiclass": i = 0 toks = Tree("S", []) for (s, e, typ) in sorted(entities): if s < i: s = i if e <= s: continue toks.extend(word_tokenize(text[i:s])) toks.append(Tree(typ, text[s:e].split())) i = e toks.extend(word_tokenize(text[i:])) yield toks else: raise ValueError("bad fmt value") def cmp_chunks(correct, guessed): correct = NEChunkParser._parse_to_tagged(correct) guessed = NEChunkParser._parse_to_tagged(guessed) ellipsis = False for (w, ct), (w, gt) in zip(correct, guessed): if ct == gt == "O": if not ellipsis: print(f" {ct:15} {gt:15} {w}") print(" {:15} {:15} {2}".format("...", "...", "...")) ellipsis = True else: ellipsis = False print(f" {ct:15} {gt:15} {w}") def build_model(fmt="binary"): print("Loading training data...") train_paths = [ find("corpora/ace_data/ace.dev"), find("corpora/ace_data/ace.heldout"), find("corpora/ace_data/bbn.dev"), find("corpora/ace_data/muc.dev"), ] train_trees = load_ace_data(train_paths, fmt) train_data = [postag_tree(t) for t in train_trees] print("Training...") cp = NEChunkParser(train_data) del train_data print("Loading eval data...") eval_paths = [find("corpora/ace_data/ace.eval")] eval_trees = load_ace_data(eval_paths, fmt) eval_data = [postag_tree(t) for t in eval_trees] print("Evaluating...") chunkscore = ChunkScore() for i, correct in enumerate(eval_data): guess = cp.parse(correct.leaves()) chunkscore.score(correct, guess) if i < 3: cmp_chunks(correct, guess) print(chunkscore) outfilename = f"/tmp/ne_chunker_{fmt}.pickle" print(f"Saving chunker to {outfilename}...") with 
open(outfilename, "wb") as outfile: pickle.dump(cp, outfile, -1) return cp if __name__ == "__main__": from nltk.chunk.named_entity import build_model build_model("binary") build_model("multiclass")
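The small helper functions defined above are deterministic and easy to sanity-check directly; the values in the comments follow from the regular expressions in shape() and the prefix logic in simplify_pos().

print(shape("3.14"))           # 'number'
print(shape("London"))         # 'upcase'
print(shape("..."))            # 'punct'
print(simplify_pos("VBD"))     # 'V'
print(simplify_pos("NNP-TL"))  # 'NNP'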
Natural Language Toolkit: Classifiers
(C) 2001-2023 NLTK Project
Author: Edward Loper <edloper@gmail.com>
URL: <https://www.nltk.org/>; for license information, see LICENSE.TXT

Classes and interfaces for labeling tokens with category labels (or "class labels"). Typically, labels are represented with strings (such as 'health' or 'sports'). Classifiers can be used to perform a wide range of classification tasks. For example, classifiers can be used to classify documents by topic, to classify ambiguous words by which word sense is intended, to classify acoustic signals by which phoneme they represent, and to classify sentences by their author.

Features
In order to decide which category label is appropriate for a given token, classifiers examine one or more "features" of the token. These features are typically chosen by hand, and indicate which aspects of the token are relevant to the classification decision. For example, a document classifier might use a separate feature for each word, recording how often that word occurred in the document.

Featuresets
The features describing a token are encoded using a "featureset", which is a dictionary that maps from feature names to feature values. Feature names are unique strings that indicate what aspect of the token is encoded by the feature. Examples include 'prevword', for a feature whose value is the previous word; and 'contains-word(library)', for a feature that is true when a document contains the word 'library'. Feature values are typically booleans, numbers, or strings, depending on which feature they describe.

Featuresets are typically constructed using a "feature detector" (also known as a "feature extractor"). A feature detector is a function that takes a token (and sometimes information about its context) as its input, and returns a featureset describing that token. For example, the following feature detector converts a document (stored as a list of words) to a featureset describing the set of words included in the document:

    >>> # Define a feature detector function.
    >>> def document_features(document):
    ...     return dict(('contains-word(%s)' % w, True) for w in document)

Feature detectors are typically applied to each token before it is fed to the classifier:

    >>> # Classify each Gutenberg document.
    >>> from nltk.corpus import gutenberg
    >>> for fileid in gutenberg.fileids(): # doctest: +SKIP
    ...     doc = gutenberg.words(fileid) # doctest: +SKIP
    ...     print(fileid, classifier.classify(document_features(doc))) # doctest: +SKIP

The parameters that a feature detector expects will vary, depending on the task and the needs of the feature detector. For example, a feature detector for word sense disambiguation (WSD) might take as its input a sentence and the index of a word that should be classified, and return a featureset for that word. The following feature detector for WSD includes features describing the left and right contexts of the target word:

    >>> def wsd_features(sentence, index):
    ...     featureset = {}
    ...     for i in range(max(0, index-3), index):
    ...         featureset['left-context(%s)' % sentence[i]] = True
    ...     for i in range(index, max(index+3, len(sentence))):
    ...         featureset['right-context(%s)' % sentence[i]] = True
    ...     return featureset

Training Classifiers
Most classifiers are built by training them on a list of hand-labeled examples, known as the "training set". Training sets are represented as lists of (featuredict, label) tuples.
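To make the featureset and training-set conventions above concrete, here is a small, self-contained sketch that builds a toy training set with a feature detector and trains an off-the-shelf NLTK classifier on it. The documents and labels are invented for illustration.

from nltk.classify import NaiveBayesClassifier

def document_features(document):
    # One boolean feature per word in the document.
    return {f"contains-word({w})": True for w in document}

train_set = [
    (document_features(["ball", "goal", "score"]), "sports"),
    (document_features(["diet", "exercise", "sleep"]), "health"),
]
classifier = NaiveBayesClassifier.train(train_set)
print(classifier.classify(document_features(["goal", "score"])))  # likely 'sports'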
from nltk.classify.api import ClassifierI, MultiClassifierI from nltk.classify.decisiontree import DecisionTreeClassifier from nltk.classify.maxent import ( BinaryMaxentFeatureEncoding, ConditionalExponentialClassifier, MaxentClassifier, TypedMaxentFeatureEncoding, ) from nltk.classify.megam import call_megam, config_megam from nltk.classify.naivebayes import NaiveBayesClassifier from nltk.classify.positivenaivebayes import PositiveNaiveBayesClassifier from nltk.classify.rte_classify import RTEFeatureExtractor, rte_classifier, rte_features from nltk.classify.scikitlearn import SklearnClassifier from nltk.classify.senna import Senna from nltk.classify.textcat import TextCat from nltk.classify.util import accuracy, apply_features, log_likelihood from nltk.classify.weka import WekaClassifier, config_weka
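Two of the utilities re-exported above, apply_features() and accuracy(), cover the common train-then-evaluate loop. The sketch below uses invented name data purely for demonstration.

from nltk.classify import NaiveBayesClassifier
from nltk.classify.util import accuracy, apply_features

def gender_features(name):
    return {"last_letter": name[-1]}

train = [("anna", "female"), ("maria", "female"), ("john", "male"), ("mark", "male")]
test = [("lisa", "female"), ("tom", "male")]

# apply_features builds the (featureset, label) pairs lazily.
classifier = NaiveBayesClassifier.train(apply_features(gender_features, train, labeled=True))
print(accuracy(classifier, apply_features(gender_features, test, labeled=True)))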
Natural Language Toolkit: Classifier Interface
(C) 2001-2023 NLTK Project
Authors: Edward Loper <edloper@gmail.com>, Steven Bird <stevenbird1@gmail.com> (minor additions)
URL: <https://www.nltk.org/>; for license information, see LICENSE.TXT

Interfaces for labeling tokens with category labels (or "class labels").

ClassifierI is a standard interface for "single-category classification", in which the set of categories is known, the number of categories is finite, and each text belongs to exactly one category.

MultiClassifierI is a standard interface for "multi-category classification", which is like single-category classification except that each text belongs to zero or more categories.

Classification interfaces:

ClassifierI: a processing interface for labeling tokens with a single category label (or "class"). Labels are typically strs or ints, but can be any immutable type. The set of labels that the classifier chooses from must be fixed and finite. Subclasses must define labels() and either classify() or classify_many() (or both); subclasses may define either prob_classify() or prob_classify_many() (or both).
- labels(): return the list of category labels used by this classifier. :rtype: list of (immutable)
- classify(featureset): return the most appropriate label for the given featureset. :rtype: label
- prob_classify(featureset): return a probability distribution over labels for the given featureset. :rtype: ProbDistI
- classify_many(featuresets): apply self.classify() to each element of featuresets, i.e. return [self.classify(fs) for fs in featuresets]. :rtype: list(label)
- prob_classify_many(featuresets): apply self.prob_classify() to each element of featuresets. :rtype: list(ProbDistI)

MultiClassifierI: a processing interface for labeling tokens with zero or more category labels (or "labels"). Labels are typically strs or ints, but can be any immutable type. The set of labels that the multi-classifier chooses from must be fixed and finite. Subclasses must define labels() and either classify() or classify_many() (or both); subclasses may define either prob_classify() or prob_classify_many() (or both).
- labels(): return the list of category labels used by this classifier. :rtype: list of (immutable)
- classify(featureset): return the most appropriate set of labels for the given featureset. :rtype: set(label)
- prob_classify(featureset): return a probability distribution over sets of labels for the given featureset. :rtype: ProbDistI
- classify_many(featuresets): apply self.classify() to each element of featuresets. :rtype: list(set(label))
- prob_classify_many(featuresets): apply self.prob_classify() to each element of featuresets. :rtype: list(ProbDistI)

XX: IN PROGRESS:
class SequenceClassifierI: a processing interface for labeling sequences of tokens with a single category label (or "class"). Labels are typically strs or ints, but can be any immutable type. The set of labels that the classifier chooses from must be fixed and finite.
- def labels(self): return the list of category labels used by this classifier. :rtype: list of (immutable). raise NotImplementedError
- def prob_classify(self, featureset): return a probability distribution over labels for the given featureset. If featureset is a list of featuresets, then return a corresponding list containing the probability distribution over labels for each of the given featuresets, where the i-th element of this list is the most appropriate label for the i-th element of featuresets. raise NotImplementedError
- def classify(self, featureset): return the most appropriate label for the given featureset. If featureset is a list of featuresets, then return a corresponding list containing the most appropriate label for each of the given featuresets, where the i-th element of this list is the most appropriate label for the i-th element of featuresets. raise NotImplementedError
from nltk.internals import overridden class ClassifierI: def labels(self): raise NotImplementedError() def classify(self, featureset): if overridden(self.classify_many): return self.classify_many([featureset])[0] else: raise NotImplementedError() def prob_classify(self, featureset): if overridden(self.prob_classify_many): return self.prob_classify_many([featureset])[0] else: raise NotImplementedError() def classify_many(self, featuresets): return [self.classify(fs) for fs in featuresets] def prob_classify_many(self, featuresets): return [self.prob_classify(fs) for fs in featuresets] class MultiClassifierI: def labels(self): raise NotImplementedError() def classify(self, featureset): if overridden(self.classify_many): return self.classify_many([featureset])[0] else: raise NotImplementedError() def prob_classify(self, featureset): if overridden(self.prob_classify_many): return self.prob_classify_many([featureset])[0] else: raise NotImplementedError() def classify_many(self, featuresets): return [self.classify(fs) for fs in featuresets] def prob_classify_many(self, featuresets): return [self.prob_classify(fs) for fs in featuresets]
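The base classes above delegate between the single-item and *_many methods, so a subclass only has to override one side. A minimal, hypothetical implementation for illustration:

class ConstantClassifier(ClassifierI):
    """Toy classifier that assigns the same label to every featureset."""

    def __init__(self, label):
        self._label = label

    def labels(self):
        return [self._label]

    def classify(self, featureset):
        return self._label

clf = ConstantClassifier("spam")
print(clf.classify({"contains(offer)": True}))  # 'spam'
print(clf.classify_many([{"a": 1}, {"b": 2}]))  # ['spam', 'spam']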
Natural Language Toolkit: Decision Tree Classifiers
(C) 2001-2023 NLTK Project
Author: Edward Loper <edloper@gmail.com>
URL: <https://www.nltk.org/>; for license information, see LICENSE.TXT

A classifier model that decides which label to assign to a token on the basis of a tree structure, where branches correspond to conditions on feature values, and leaves correspond to label assignments.

DecisionTreeClassifier.__init__ parameters:
:param label: the most likely label for tokens that reach this node in the decision tree. If this decision tree has no children, then this label will be assigned to any token that reaches this decision tree.
:param feature_name: the name of the feature that this decision tree selects for.
:param decisions: a dictionary mapping from feature values (for the feature identified by feature_name) to child decision trees.
:param default: the child that will be used if the value of feature feature_name does not match any of the keys in decisions. This is used when constructing binary decision trees.

Other comments and docstrings from the module body:
- classify() distinguishes a decision leaf from a decision tree node.
- pretty_format(): return a string containing a pretty-printed version of this decision tree. Each line in this string corresponds to a single decision tree node or leaf, and indentation is used to display the structure of the decision tree. (XX: display default.)
- pseudocode(): return a string representation of this decision tree that expresses the decisions it makes as a nested set of pseudocode if statements. :param binary: if true, then treat all feature/value pairs as individual binary features, rather than using a single n-way branch for each feature.
- train(): collect a list of all feature names; collect a list of the values each feature can take; start with a stump; refine the stump; return it.
- stump() / binary_stump(): find the best label for each value (freq(label|value)); "but hopefully we have observations".
- demo() runs DecisionTreeClassifier.train.
from collections import defaultdict

from nltk.classify.api import ClassifierI
from nltk.probability import FreqDist, MLEProbDist, entropy


class DecisionTreeClassifier(ClassifierI):
    def __init__(self, label, feature_name=None, decisions=None, default=None):
        self._label = label
        self._fname = feature_name
        self._decisions = decisions
        self._default = default

    def labels(self):
        labels = [self._label]
        if self._decisions is not None:
            for dt in self._decisions.values():
                labels.extend(dt.labels())
        if self._default is not None:
            labels.extend(self._default.labels())
        return list(set(labels))

    def classify(self, featureset):
        if self._fname is None:
            return self._label

        fval = featureset.get(self._fname)
        if fval in self._decisions:
            return self._decisions[fval].classify(featureset)
        elif self._default is not None:
            return self._default.classify(featureset)
        else:
            return self._label

    def error(self, labeled_featuresets):
        errors = 0
        for featureset, label in labeled_featuresets:
            if self.classify(featureset) != label:
                errors += 1
        return errors / len(labeled_featuresets)

    def pretty_format(self, width=70, prefix="", depth=4):
        if self._fname is None:
            n = width - len(prefix) - 15
            return "{}{} {}\n".format(prefix, "." * n, self._label)
        s = ""
        for i, (fval, result) in enumerate(
            sorted(
                self._decisions.items(),
                key=lambda item: (item[0] in [None, False, True], str(item[0]).lower()),
            )
        ):
            hdr = f"{prefix}{self._fname}={fval}? "
            n = width - 15 - len(hdr)
            s += "{}{} {}\n".format(hdr, "." * (n), result._label)
            if result._fname is not None and depth > 1:
                s += result.pretty_format(width, prefix + " ", depth - 1)
        if self._default is not None:
            n = width - len(prefix) - 21
            s += "{}else: {} {}\n".format(prefix, "." * n, self._default._label)
            if self._default._fname is not None and depth > 1:
                s += self._default.pretty_format(width, prefix + " ", depth - 1)
        return s

    def pseudocode(self, prefix="", depth=4):
        if self._fname is None:
            return f"{prefix}return {self._label!r}\n"
        s = ""
        for (fval, result) in sorted(
            self._decisions.items(),
            key=lambda item: (item[0] in [None, False, True], str(item[0]).lower()),
        ):
            s += f"{prefix}if {self._fname} == {fval!r}: "
            if result._fname is not None and depth > 1:
                s += "\n" + result.pseudocode(prefix + " ", depth - 1)
            else:
                s += f"return {result._label!r}\n"
        if self._default is not None:
            if len(self._decisions) == 1:
                s += "{}if {} != {!r}: ".format(
                    prefix, self._fname, list(self._decisions.keys())[0]
                )
            else:
                s += f"{prefix}else: "
            if self._default._fname is not None and depth > 1:
                s += "\n" + self._default.pseudocode(prefix + " ", depth - 1)
            else:
                s += f"return {self._default._label!r}\n"
        return s

    def __str__(self):
        return self.pretty_format()

    @staticmethod
    def train(
        labeled_featuresets,
        entropy_cutoff=0.05,
        depth_cutoff=100,
        support_cutoff=10,
        binary=False,
        feature_values=None,
        verbose=False,
    ):
        feature_names = set()
        for featureset, label in labeled_featuresets:
            for fname in featureset:
                feature_names.add(fname)

        if feature_values is None and binary:
            feature_values = defaultdict(set)
            for featureset, label in labeled_featuresets:
                for fname, fval in featureset.items():
                    feature_values[fname].add(fval)

        if not binary:
            tree = DecisionTreeClassifier.best_stump(
                feature_names, labeled_featuresets, verbose
            )
        else:
            tree = DecisionTreeClassifier.best_binary_stump(
                feature_names, labeled_featuresets, feature_values, verbose
            )

        tree.refine(
            labeled_featuresets,
            entropy_cutoff,
            depth_cutoff - 1,
            support_cutoff,
            binary,
            feature_values,
            verbose,
        )
        return tree

    @staticmethod
    def leaf(labeled_featuresets):
        label = FreqDist(label for (featureset, label) in labeled_featuresets).max()
        return DecisionTreeClassifier(label)

    @staticmethod
    def stump(feature_name, labeled_featuresets):
        label = FreqDist(label for (featureset, label) in labeled_featuresets).max()

        freqs = defaultdict(FreqDist)
        for featureset, label in labeled_featuresets:
            feature_value = featureset.get(feature_name)
            freqs[feature_value][label] += 1

        decisions = {val: DecisionTreeClassifier(freqs[val].max()) for val in freqs}
        return DecisionTreeClassifier(label, feature_name, decisions)

    def refine(
        self,
        labeled_featuresets,
        entropy_cutoff,
        depth_cutoff,
        support_cutoff,
        binary=False,
        feature_values=None,
        verbose=False,
    ):
        if len(labeled_featuresets) <= support_cutoff:
            return
        if self._fname is None:
            return
        if depth_cutoff <= 0:
            return
        for fval in self._decisions:
            fval_featuresets = [
                (featureset, label)
                for (featureset, label) in labeled_featuresets
                if featureset.get(self._fname) == fval
            ]

            label_freqs = FreqDist(label for (featureset, label) in fval_featuresets)
            if entropy(MLEProbDist(label_freqs)) > entropy_cutoff:
                self._decisions[fval] = DecisionTreeClassifier.train(
                    fval_featuresets,
                    entropy_cutoff,
                    depth_cutoff,
                    support_cutoff,
                    binary,
                    feature_values,
                    verbose,
                )
        if self._default is not None:
            default_featuresets = [
                (featureset, label)
                for (featureset, label) in labeled_featuresets
                if featureset.get(self._fname) not in self._decisions
            ]
            label_freqs = FreqDist(label for (featureset, label) in default_featuresets)
            if entropy(MLEProbDist(label_freqs)) > entropy_cutoff:
                self._default = DecisionTreeClassifier.train(
                    default_featuresets,
                    entropy_cutoff,
                    depth_cutoff,
                    support_cutoff,
                    binary,
                    feature_values,
                    verbose,
                )

    @staticmethod
    def best_stump(feature_names, labeled_featuresets, verbose=False):
        best_stump = DecisionTreeClassifier.leaf(labeled_featuresets)
        best_error = best_stump.error(labeled_featuresets)
        for fname in feature_names:
            stump = DecisionTreeClassifier.stump(fname, labeled_featuresets)
            stump_error = stump.error(labeled_featuresets)
            if stump_error < best_error:
                best_error = stump_error
                best_stump = stump
        if verbose:
            print(
                "best stump for {:6d} toks uses {:20} err={:6.4f}".format(
                    len(labeled_featuresets), best_stump._fname, best_error
                )
            )
        return best_stump

    @staticmethod
    def binary_stump(feature_name, feature_value, labeled_featuresets):
        label = FreqDist(label for (featureset, label) in labeled_featuresets).max()

        pos_fdist = FreqDist()
        neg_fdist = FreqDist()
        for featureset, label in labeled_featuresets:
            if featureset.get(feature_name) == feature_value:
                pos_fdist[label] += 1
            else:
                neg_fdist[label] += 1

        decisions = {}
        default = label
        if pos_fdist.N() > 0:
            decisions = {feature_value: DecisionTreeClassifier(pos_fdist.max())}
        if neg_fdist.N() > 0:
            default = DecisionTreeClassifier(neg_fdist.max())

        return DecisionTreeClassifier(label, feature_name, decisions, default)

    @staticmethod
    def best_binary_stump(
        feature_names, labeled_featuresets, feature_values, verbose=False
    ):
        best_stump = DecisionTreeClassifier.leaf(labeled_featuresets)
        best_error = best_stump.error(labeled_featuresets)
        for fname in feature_names:
            for fval in feature_values[fname]:
                stump = DecisionTreeClassifier.binary_stump(
                    fname, fval, labeled_featuresets
                )
                stump_error = stump.error(labeled_featuresets)
                if stump_error < best_error:
                    best_error = stump_error
                    best_stump = stump
        if verbose:
            if best_stump._decisions:
                descr = "{}={}".format(
                    best_stump._fname, list(best_stump._decisions.keys())[0]
                )
            else:
                descr = "(default)"
            print(
                "best stump for {:6d} toks uses {:20} err={:6.4f}".format(
                    len(labeled_featuresets), descr, best_error
                )
            )
        return best_stump


def f(x):
    return DecisionTreeClassifier.train(x, binary=True, verbose=True)


def demo():
    from nltk.classify.util import binary_names_demo_features, names_demo

    classifier = names_demo(f, binary_names_demo_features)
    print(classifier.pretty_format(depth=7))
    print(classifier.pseudocode(depth=7))


if __name__ == "__main__":
    demo()
natural language toolkit maximum entropy classifiers c 2001 2023 nltk project edward loper edloper gmail com dmitry chichkov dchichkov gmail com typedmaxentfeatureencoding url https www nltk org for license information see license txt a classifier model based on maximum entropy modeling framework this framework considers all of the probability distributions that are empirically consistent with the training data and chooses the distribution with the highest entropy a probability distribution is empirically consistent with a set of training data if its estimated frequency with which a class and a feature vector value co occur is equal to the actual frequency in the data terminology feature the term feature is usually used to refer to some property of an unlabeled token for example when performing word sense disambiguation we might define a prevword feature whose value is the word preceding the target word however in the context of maxent modeling the term feature is typically used to refer to a property of a labeled token in order to prevent confusion we will introduce two distinct terms to disambiguate these two different concepts an input feature is a property of an unlabeled token a joint feature is a property of a labeled token in the rest of the nltk classify module the term features is used to refer to what we will call input features in this module in literature that describes and discusses maximum entropy models input features are typically called contexts and joint features are simply referred to as features converting input features to joint features in maximum entropy models joint features are required to have numeric values typically each input feature input_feat is mapped to a set of joint features of the form joint_feat token label 1 if input_feat token feat_val and label some_label 0 otherwise for all values of feat_val and some_label this mapping is performed by classes that implement the maxentfeatureencodingi interface classifier model a maximum entropy classifier also known as a
conditional exponential classifier this classifier is parameterized by a set of weights which are used to combine the joint features that are generated from a featureset by an encoding in particular the encoding maps each featureset label pair to a vector the probability of each label is then computed using the following equation dotprod weights encode fs label prob fs label sum dotprod weights encode fs l for l in labels where dotprod is the dot product dotprod a b sum x y for x y in zip a b construct a new maxent classifier model typically new classifier models are created using the train method type encoding maxentfeatureencodingi param encoding an encoding that is used to convert the featuresets that are given to the classify method into joint feature vectors which are used by the maxent classifier model type weights list of float param weights the feature weight vector for this classifier type logarithmic bool param logarithmic if false then use non logarithmic weights self _logarithmic false set the feature weight vector for this classifier param new_weights the new feature weight vector type new_weights list of float return the feature weight vector for this classifier rtype list of float normalize the dictionary to give a probability distribution print a table showing the effect of each of the features in the given feature set and how they combine to determine the probabilities of each label for that featureset hack hack generates the ranked list of informative features from most to least param show all neg or pos for negative only or positive only type show str param n the no of top features type n int use none the full list of ranked features a list of the algorithm names that are accepted for the train method s algorithm parameter train a new maxent classifier based on the given corpus of training samples this classifier will have its weights chosen to maximize entropy while remaining empirically consistent with the training corpus rtype maxentclassifier return the new maxent classifier type train_toks list param train_toks training data represented as a list of pairs the first member of which is a featureset and the second of which is a classification label type algorithm str param algorithm a case insensitive string specifying which algorithm should be used to train the classifier the following algorithms are currently available iterative scaling methods generalized iterative scaling gis improved iterative scaling iis external libraries requiring megam lm bfgs algorithm with training performed by megam megam the default algorithm is iis type trace int param trace the level of diagnostic tracing output to produce higher values produce more verbose output type encoding maxentfeatureencodingi param encoding a feature encoding used to convert featuresets into feature vectors if none is specified then a binarymaxentfeatureencoding will be built based on the features that are attested in the training corpus type labels list str param labels the set of possible labels if none is given then the set of all labels attested in the training data will be used instead param gaussian_prior_sigma the sigma value for a gaussian prior on model weights currently this is supported by megam for other algorithms its value is ignored param cutoffs arguments specifying various conditions under which the training should be halted some of the cutoff conditions are not supported by some algorithms max_iter v terminate after v iterations min_ll v terminate after the negative average log likelihood drops 
under v min_lldelta v terminate if a single iteration improves log likelihood by less than v alias for maxentclassifier feature encodings a mapping that converts a set of input feature values to a vector of joint feature values given a label this conversion is necessary to translate featuresets into a format that can be used by maximum entropy models the set of joint features used by a given encoding is fixed and each index in the generated joint feature vectors corresponds to a single joint feature the length of the generated joint feature vectors is therefore constant for a given encoding because the joint feature vectors generated by maxentfeatureencodingi are typically very sparse they are represented as a list of index value tuples specifying the value of each non zero joint feature feature encodings are generally created using the train method which generates an appropriate encoding based on the input feature values and labels that are present in a given corpus given a featureset label pair return the corresponding vector of joint feature values this vector is represented as a list of index value tuples specifying the value of each non zero joint feature type featureset dict rtype list tuple int int return the size of the fixed length joint feature vectors that are generated by this encoding rtype int return a list of the known labels i e all labels l such that self encode fs l can be a nonzero joint feature vector for some value of fs rtype list return a string describing the value of the joint feature whose index in the generated feature vectors is fid rtype str construct and return new feature encoding based on a given training corpus train_toks type train_toks list tuple dict str param train_toks training data represented as a list of pairs the first member of which is a feature dictionary and the second of which is a classification label a feature encoding that calls a user supplied function to map a given featureset label pair to a sparse joint feature vector construct a new feature encoding based on the given function type func callable param func a function that takes two arguments a featureset and a label and returns the sparse joint feature vector that encodes them func featureset label feature_vector this sparse joint feature vector feature_vector is a list of index value tuples type length int param length the size of the fixed length joint feature vectors that are generated by this encoding type labels list param labels a list of the known labels for this encoding i e all labels l such that self encode fs l can be a nonzero joint feature vector for some value of fs a feature encoding that generates vectors containing a binary joint features of the form joint_feat fs l 1 if fs fname fval and l label 0 otherwise where fname is the name of an input feature fval is a value for that input feature and label is a label typically these features are constructed based on a training corpus using the train method this method will create one feature for each combination of fname fval and label that occurs at least once in the training corpus the unseen_features parameter can be used to add unseen value features which are used whenever an input feature has a value that was not encountered in the training corpus these features have the form joint_feat fs l 1 if is_unseen fname fs fname and l label 0 otherwise where is_unseen fname fval is true if the encoding does not contain any joint features that are true when fs fname fval the alwayson_features parameter can be used to add always 
on features which have the form joint_feat fs l 1 if l label 0 otherwise these always on features allow the maxent model to directly model the prior probabilities of each label param labels a list of the known labels for this encoding param mapping a dictionary mapping from fname fval label tuples to corresponding joint feature indexes these indexes must be the set of integers from 0 len mapping if mapping fname fval label id then self encode fname fval label id is 1 otherwise it is 0 param unseen_features if true then include unseen value features in the generated joint feature vectors param alwayson_features if true then include always on features in the generated joint feature vectors a list of attested labels dict mapping from fname fval label fid the length of generated joint feature vectors dict mapping from label fid dict mapping from fname fid inherit docs convert input features to joint features known feature name value otherwise we might want to fire an unseen value feature have we seen this fname fval combination with any label we ve seen this fname fval combo we haven t fire the unseen value feature add always on features inherit docs inherit docs inherit docs construct and return new feature encoding based on a given training corpus train_toks see the class description binarymaxentfeatureencoding for a description of the joint features that will be included in this encoding type train_toks list tuple dict str param train_toks training data represented as a list of pairs the first member of which is a feature dictionary and the second of which is a classification label type count_cutoff int param count_cutoff a cutoff value that is used to discard rare joint features if a joint feature s value is 1 fewer than count_cutoff times in the training corpus then that joint feature is not included in the generated encoding type labels list param labels a list of labels that should be used by the classifier if not specified then the set of labels attested in train_toks will be used param options extra parameters for the constructor such as unseen_features and alwayson_features maps fname fval label fid the set of labels we ve encountered maps fname fval count record each of the features if a count cutoff is given then only add a joint feature once the corresponding fname fval label tuple exceeds that cutoff a binary feature encoding which adds one new joint feature to the joint features defined by binarymaxentfeatureencoding a correction feature whose value is chosen to ensure that the sparse vector always sums to a constant non negative number this new feature is used to ensure two preconditions for the gis training algorithm at least one feature vector index must be nonzero for every token the feature vector must sum to a constant non negative number for every token param c the correction constant the value of the correction feature is based on this value in particular its value is c sum v for f v in encoding seealso binarymaxentfeatureencoding __init__ the non negative constant that all encoded feature vectors will sum to get the basic encoding add a correction feature return the result this gets read twice so compute the values in case it s lazy a feature encoding that generates vectors containing integer float and binary joint features of the form binary for string and boolean features joint_feat fs l 1 if fs fname fval and l label 0 otherwise value for integer and float features joint_feat fs l fval if fs fname type fval and l label not encoded otherwise where fname is the name of 
an input feature fval is a value for that input feature and label is a label typically these features are constructed based on a training corpus using the train method for string and boolean features type fval not in int float this method will create one feature for each combination of fname fval and label that occurs at least once in the training corpus for integer and float features type fval in int float this method will create one feature for each combination of fname and label that occurs at least once in the training corpus for binary features the unseen_features parameter can be used to add unseen value features which are used whenever an input feature has a value that was not encountered in the training corpus these features have the form joint_feat fs l 1 if is_unseen fname fs fname and l label 0 otherwise where is_unseen fname fval is true if the encoding does not contain any joint features that are true when fs fname fval the alwayson_features parameter can be used to add always on features which have the form joint_feat fs l 1 if l label 0 otherwise these always on features allow the maxent model to directly model the prior probabilities of each label param labels a list of the known labels for this encoding param mapping a dictionary mapping from fname fval label tuples to corresponding joint feature indexes these indexes must be the set of integers from 0 len mapping if mapping fname fval label id then self encode fname fval label id is 1 otherwise it is 0 param unseen_features if true then include unseen value features in the generated joint feature vectors param alwayson_features if true then include always on features in the generated joint feature vectors a list of attested labels dict mapping from fname fval label fid the length of generated joint feature vectors dict mapping from label fid dict mapping from fname fid inherit docs convert input features to joint features known feature name value known feature name value otherwise we might want to fire an unseen value feature have we seen this fname fval combination with any label we ve seen this fname fval combo we haven t fire the unseen value feature add always on features inherit docs inherit docs inherit docs construct and return new feature encoding based on a given training corpus train_toks see the class description typedmaxentfeatureencoding for a description of the joint features that will be included in this encoding note recognized feature values types are int float over types are interpreted as regular binary features type train_toks list tuple dict str param train_toks training data represented as a list of pairs the first member of which is a feature dictionary and the second of which is a classification label type count_cutoff int param count_cutoff a cutoff value that is used to discard rare joint features if a joint feature s value is 1 fewer than count_cutoff times in the training corpus then that joint feature is not included in the generated encoding type labels list param labels a list of labels that should be used by the classifier if not specified then the set of labels attested in train_toks will be used param options extra parameters for the constructor such as unseen_features and alwayson_features maps fname fval label fid the set of labels we ve encountered maps fname fval count record each of the features if a count cutoff is given then only add a joint feature once the corresponding fname fval label tuple exceeds that cutoff classifier trainer generalized iterative scaling train a new 
conditionalexponentialclassifier using the given training samples using the generalized iterative scaling algorithm this conditionalexponentialclassifier will encode the model that maximizes entropy from all the models that are empirically consistent with train_toks see train_maxent_classifier for parameter descriptions construct an encoding from the training data cinv is the inverse of the sum of each joint feature vector this controls the learning rate higher cinv or lower c gives faster learning count how many times each feature occurs in the training data check for any features that are not attested in train_toks build the classifier start with weight 0 for each attested feature and weight infinity for each unattested feature take the log of the empirical fcount train the classifier use the model to estimate the number of times each feature should occur in the training data take the log of estimated fcount avoid taking log 0 update the classifier weights check the log likelihood accuracy cutoffs return the classifier classifier trainer improved iterative scaling train a new conditionalexponentialclassifier using the given training samples using the improved iterative scaling algorithm this conditionalexponentialclassifier will encode the model that maximizes entropy from all the models that are empirically consistent with train_toks see train_maxent_classifier for parameter descriptions construct an encoding from the training data count how many times each feature occurs in the training data find the nf map and related variables nfarray and nfident nf is the sum of the features for a given labeled text nfmap compresses this sparse set of values to a dense list nfarray performs the reverse operation nfident is nfarray multiplied by an identity matrix check for any features that are not attested in train_toks build the classifier start with weight 0 for each attested feature and weight infinity for each unattested feature train the classifier calculate the deltas for this iteration using newton s method use the deltas to update our weights check the log likelihood accuracy cutoffs return the classifier construct a map that can be used to compress nf which is typically sparse nf feature_vector is the sum of the feature values for feature_vector this represents the number of features that are active for a given labeled text this method finds all values of nf t that are attested for at least one token in the given list of training tokens and constructs a dictionary mapping these attested values to a continuous range 0 n for example if the only values of nf that were attested were 3 5 and 7 then _nfmap might return the dictionary 3 0 5 1 7 2 return a map that can be used to compress nf to a dense vector rtype dict int int map from nf to indices this allows us to use smaller arrays calculate the update values for the classifier weights for this iteration of iis these update weights are the value of delta that solves the equation ffreq_empirical i sum fs l classifier prob_classify fs prob l feature_vector fs l i exp delta i nf feature_vector fs l where fs l is a featureset label tuple from train_toks feature_vector fs l encoding encode fs l nf vector sum val for id val in vector this method uses newton s method to solve this equation for delta i in particular it starts with a guess of delta i 1 and iteratively updates delta with delta i ffreq_empirical i sum1 i sum2 i until convergence where sum1 and sum2 are defined as sum1 i delta sum fs l f i fs l delta sum2 i delta sum fs l f i fs l delta 
nf feature_vector fs l f i fs l delta classifier prob_classify fs prob l feature_vector fs l i exp delta i nf feature_vector fs l note that sum1 and sum2 depend on delta so they need to be re computed each iteration the variables nfmap nfarray and nftranspose are used to generate a dense encoding for nf ltext this allows _deltas to calculate sum1 and sum2 using matrices which yields a significant performance improvement param train_toks the set of training tokens type train_toks list tuple dict str param classifier the current classifier type classifier classifieri param ffreq_empirical an array containing the empirical frequency for each feature the i th element of this array is the empirical frequency for feature i type ffreq_empirical sequence of float param unattested an array that is 1 for features that are not attested in the training data and 0 for features that are attested in other words unattested i 0 iff ffreq_empirical i 0 type unattested sequence of int param nfmap a map that can be used to compress nf to a dense vector type nfmap dict int int param nfarray an array that can be used to uncompress nf from a dense vector type nfarray array float param nftranspose the transpose of nfarray type nftranspose array float these parameters control when we decide that we ve converged it probably should be possible to set these manually via keyword arguments to train precompute the a matrix a nf id sum p fs p label fs f fs label over all label fs s t num_features label fs nf generate the feature vector find the number of active features update the a matrix iteratively solve for delta use the following variables nf_delta x y nfarray x delta y exp_nf_delta x y exp nf x delta y nf_exp_nf_delta x y nf x exp nf x delta y sum1 i nf sum p fs p label fs f i label fs exp delta i nf sum2 i nf sum p fs p label fs f i label fs nf exp delta i nf avoid division by zero update the deltas we can stop once we converge classifier trainer megam xx possible extension add support for using implicit file format this would need to put requirements on what encoding is used but we may need this for other maxent classifier trainers that require implicit formats anyway train a new conditionalexponentialclassifier using the given training samples using the external megam library this conditionalexponentialclassifier will encode the model that maximizes entropy from all the models that are empirically consistent with train_toks see train_maxent_classifier for parameter descriptions see nltk classify megam construct an encoding from the training data count cutoff can also be controlled by megam with the minfc option not sure where the best place for it is write a training file for megam run megam on the training file lambda is just the precision of the gaussian prior i e it s the inverse variance so the parameter conversion is 1 0 sigma 2 see https users umiacs umd edu hal docs daume04cg bfgs pdf xx this is actually a perplexity delta not a log likelihood delta each possible la print megam_i686 opt join options delete the training file parse the generated weight vector convert from base e to base 2 weights build the classifier classifier trainer tadm construct an encoding from the training data convert from base e to base 2 weights build the classifier demo
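The comments above describe the model prob(fs, label) = dotprod(weights, encode(fs, label)) / sum(dotprod(weights, encode(fs, l)) for l in labels). A short, hedged usage sketch of the classifier implemented below (toy featuresets invented for illustration; it relies on the built-in pure-Python IIS trainer, so no megam/tadm binaries are needed, though numpy must be installed):

from nltk.classify import MaxentClassifier

# Toy training data: (featureset, label) pairs; the features are hypothetical.
train_toks = [
    ({"outlook": "sunny", "windy": False}, "play"),
    ({"outlook": "overcast", "windy": False}, "play"),
    ({"outlook": "rainy", "windy": True}, "stay"),
    ({"outlook": "rainy", "windy": False}, "stay"),
]

# algorithm="iis" selects Improved Iterative Scaling (the default); max_iter caps the
# number of scaling iterations and trace=0 silences the per-iteration progress table.
classifier = MaxentClassifier.train(train_toks, algorithm="iis", trace=0, max_iter=10)

print(classifier.classify({"outlook": "rainy", "windy": True}))  # most likely "stay"

# prob_classify() exposes the full conditional distribution over labels.
pdist = classifier.prob_classify({"outlook": "sunny", "windy": True})
for label in pdist.samples():
    print(label, round(pdist.prob(label), 3))

# Joint features with the largest absolute weights, most informative first.
classifier.show_most_informative_features(5)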
try: import numpy except ImportError: pass import os import tempfile from collections import defaultdict from nltk.classify.api import ClassifierI from nltk.classify.megam import call_megam, parse_megam_weights, write_megam_file from nltk.classify.tadm import call_tadm, parse_tadm_weights, write_tadm_file from nltk.classify.util import CutoffChecker, accuracy, log_likelihood from nltk.data import gzip_open_unicode from nltk.probability import DictionaryProbDist from nltk.util import OrderedDict __docformat__ = "epytext en" class MaxentClassifier(ClassifierI): def __init__(self, encoding, weights, logarithmic=True): self._encoding = encoding self._weights = weights self._logarithmic = logarithmic assert encoding.length() == len(weights) def labels(self): return self._encoding.labels() def set_weights(self, new_weights): self._weights = new_weights assert self._encoding.length() == len(new_weights) def weights(self): return self._weights def classify(self, featureset): return self.prob_classify(featureset).max() def prob_classify(self, featureset): prob_dict = {} for label in self._encoding.labels(): feature_vector = self._encoding.encode(featureset, label) if self._logarithmic: total = 0.0 for (f_id, f_val) in feature_vector: total += self._weights[f_id] * f_val prob_dict[label] = total else: prod = 1.0 for (f_id, f_val) in feature_vector: prod *= self._weights[f_id] ** f_val prob_dict[label] = prod return DictionaryProbDist(prob_dict, log=self._logarithmic, normalize=True) def explain(self, featureset, columns=4): descr_width = 50 TEMPLATE = " %-" + str(descr_width - 2) + "s%s%8.3f" pdist = self.prob_classify(featureset) labels = sorted(pdist.samples(), key=pdist.prob, reverse=True) labels = labels[:columns] print( " Feature".ljust(descr_width) + "".join("%8s" % (("%s" % l)[:7]) for l in labels) ) print(" " + "-" * (descr_width - 2 + 8 * len(labels))) sums = defaultdict(int) for i, label in enumerate(labels): feature_vector = self._encoding.encode(featureset, label) feature_vector.sort( key=lambda fid__: abs(self._weights[fid__[0]]), reverse=True ) for (f_id, f_val) in feature_vector: if self._logarithmic: score = self._weights[f_id] * f_val else: score = self._weights[f_id] ** f_val descr = self._encoding.describe(f_id) descr = descr.split(" and label is ")[0] descr += " (%s)" % f_val if len(descr) > 47: descr = descr[:44] + "..." 
print(TEMPLATE % (descr, i * 8 * " ", score)) sums[label] += score print(" " + "-" * (descr_width - 1 + 8 * len(labels))) print( " TOTAL:".ljust(descr_width) + "".join("%8.3f" % sums[l] for l in labels) ) print( " PROBS:".ljust(descr_width) + "".join("%8.3f" % pdist.prob(l) for l in labels) ) def most_informative_features(self, n=10): if hasattr(self, "_most_informative_features"): return self._most_informative_features[:n] else: self._most_informative_features = sorted( list(range(len(self._weights))), key=lambda fid: abs(self._weights[fid]), reverse=True, ) return self._most_informative_features[:n] def show_most_informative_features(self, n=10, show="all"): fids = self.most_informative_features(None) if show == "pos": fids = [fid for fid in fids if self._weights[fid] > 0] elif show == "neg": fids = [fid for fid in fids if self._weights[fid] < 0] for fid in fids[:n]: print(f"{self._weights[fid]:8.3f} {self._encoding.describe(fid)}") def __repr__(self): return "<ConditionalExponentialClassifier: %d labels, %d features>" % ( len(self._encoding.labels()), self._encoding.length(), ) ALGORITHMS = ["GIS", "IIS", "MEGAM", "TADM"] @classmethod def train( cls, train_toks, algorithm=None, trace=3, encoding=None, labels=None, gaussian_prior_sigma=0, **cutoffs, ): if algorithm is None: algorithm = "iis" for key in cutoffs: if key not in ( "max_iter", "min_ll", "min_lldelta", "max_acc", "min_accdelta", "count_cutoff", "norm", "explicit", "bernoulli", ): raise TypeError("Unexpected keyword arg %r" % key) algorithm = algorithm.lower() if algorithm == "iis": return train_maxent_classifier_with_iis( train_toks, trace, encoding, labels, **cutoffs ) elif algorithm == "gis": return train_maxent_classifier_with_gis( train_toks, trace, encoding, labels, **cutoffs ) elif algorithm == "megam": return train_maxent_classifier_with_megam( train_toks, trace, encoding, labels, gaussian_prior_sigma, **cutoffs ) elif algorithm == "tadm": kwargs = cutoffs kwargs["trace"] = trace kwargs["encoding"] = encoding kwargs["labels"] = labels kwargs["gaussian_prior_sigma"] = gaussian_prior_sigma return TadmMaxentClassifier.train(train_toks, **kwargs) else: raise ValueError("Unknown algorithm %s" % algorithm) ConditionalExponentialClassifier = MaxentClassifier class MaxentFeatureEncodingI: def encode(self, featureset, label): raise NotImplementedError() def length(self): raise NotImplementedError() def labels(self): raise NotImplementedError() def describe(self, fid): raise NotImplementedError() def train(cls, train_toks): raise NotImplementedError() class FunctionBackedMaxentFeatureEncoding(MaxentFeatureEncodingI): def __init__(self, func, length, labels): self._length = length self._func = func self._labels = labels def encode(self, featureset, label): return self._func(featureset, label) def length(self): return self._length def labels(self): return self._labels def describe(self, fid): return "no description available" class BinaryMaxentFeatureEncoding(MaxentFeatureEncodingI): def __init__(self, labels, mapping, unseen_features=False, alwayson_features=False): if set(mapping.values()) != set(range(len(mapping))): raise ValueError( "Mapping values must be exactly the " "set of integers from 0...len(mapping)" ) self._labels = list(labels) self._mapping = mapping self._length = len(mapping) self._alwayson = None self._unseen = None if alwayson_features: self._alwayson = { label: i + self._length for (i, label) in enumerate(labels) } self._length += len(self._alwayson) if unseen_features: fnames = {fname for (fname, fval, label) 
in mapping} self._unseen = {fname: i + self._length for (i, fname) in enumerate(fnames)} self._length += len(fnames) def encode(self, featureset, label): encoding = [] for fname, fval in featureset.items(): if (fname, fval, label) in self._mapping: encoding.append((self._mapping[fname, fval, label], 1)) elif self._unseen: for label2 in self._labels: if (fname, fval, label2) in self._mapping: break else: if fname in self._unseen: encoding.append((self._unseen[fname], 1)) if self._alwayson and label in self._alwayson: encoding.append((self._alwayson[label], 1)) return encoding def describe(self, f_id): if not isinstance(f_id, int): raise TypeError("describe() expected an int") try: self._inv_mapping except AttributeError: self._inv_mapping = [-1] * len(self._mapping) for (info, i) in self._mapping.items(): self._inv_mapping[i] = info if f_id < len(self._mapping): (fname, fval, label) = self._inv_mapping[f_id] return f"{fname}=={fval!r} and label is {label!r}" elif self._alwayson and f_id in self._alwayson.values(): for (label, f_id2) in self._alwayson.items(): if f_id == f_id2: return "label is %r" % label elif self._unseen and f_id in self._unseen.values(): for (fname, f_id2) in self._unseen.items(): if f_id == f_id2: return "%s is unseen" % fname else: raise ValueError("Bad feature id") def labels(self): return self._labels def length(self): return self._length @classmethod def train(cls, train_toks, count_cutoff=0, labels=None, **options): mapping = {} seen_labels = set() count = defaultdict(int) for (tok, label) in train_toks: if labels and label not in labels: raise ValueError("Unexpected label %s" % label) seen_labels.add(label) for (fname, fval) in tok.items(): count[fname, fval] += 1 if count[fname, fval] >= count_cutoff: if (fname, fval, label) not in mapping: mapping[fname, fval, label] = len(mapping) if labels is None: labels = seen_labels return cls(labels, mapping, **options) class GISEncoding(BinaryMaxentFeatureEncoding): def __init__( self, labels, mapping, unseen_features=False, alwayson_features=False, C=None ): BinaryMaxentFeatureEncoding.__init__( self, labels, mapping, unseen_features, alwayson_features ) if C is None: C = len({fname for (fname, fval, label) in mapping}) + 1 self._C = C @property def C(self): return self._C def encode(self, featureset, label): encoding = BinaryMaxentFeatureEncoding.encode(self, featureset, label) base_length = BinaryMaxentFeatureEncoding.length(self) total = sum(v for (f, v) in encoding) if total >= self._C: raise ValueError("Correction feature is not high enough!") encoding.append((base_length, self._C - total)) return encoding def length(self): return BinaryMaxentFeatureEncoding.length(self) + 1 def describe(self, f_id): if f_id == BinaryMaxentFeatureEncoding.length(self): return "Correction feature (%s)" % self._C else: return BinaryMaxentFeatureEncoding.describe(self, f_id) class TadmEventMaxentFeatureEncoding(BinaryMaxentFeatureEncoding): def __init__(self, labels, mapping, unseen_features=False, alwayson_features=False): self._mapping = OrderedDict(mapping) self._label_mapping = OrderedDict() BinaryMaxentFeatureEncoding.__init__( self, labels, self._mapping, unseen_features, alwayson_features ) def encode(self, featureset, label): encoding = [] for feature, value in featureset.items(): if (feature, label) not in self._mapping: self._mapping[(feature, label)] = len(self._mapping) if value not in self._label_mapping: if not isinstance(value, int): self._label_mapping[value] = len(self._label_mapping) else: self._label_mapping[value] = 
value encoding.append( (self._mapping[(feature, label)], self._label_mapping[value]) ) return encoding def labels(self): return self._labels def describe(self, fid): for (feature, label) in self._mapping: if self._mapping[(feature, label)] == fid: return (feature, label) def length(self): return len(self._mapping) @classmethod def train(cls, train_toks, count_cutoff=0, labels=None, **options): mapping = OrderedDict() if not labels: labels = [] train_toks = list(train_toks) for (featureset, label) in train_toks: if label not in labels: labels.append(label) for (featureset, label) in train_toks: for label in labels: for feature in featureset: if (feature, label) not in mapping: mapping[(feature, label)] = len(mapping) return cls(labels, mapping, **options) class TypedMaxentFeatureEncoding(MaxentFeatureEncodingI): def __init__(self, labels, mapping, unseen_features=False, alwayson_features=False): if set(mapping.values()) != set(range(len(mapping))): raise ValueError( "Mapping values must be exactly the " "set of integers from 0...len(mapping)" ) self._labels = list(labels) self._mapping = mapping self._length = len(mapping) self._alwayson = None self._unseen = None if alwayson_features: self._alwayson = { label: i + self._length for (i, label) in enumerate(labels) } self._length += len(self._alwayson) if unseen_features: fnames = {fname for (fname, fval, label) in mapping} self._unseen = {fname: i + self._length for (i, fname) in enumerate(fnames)} self._length += len(fnames) def encode(self, featureset, label): encoding = [] for fname, fval in featureset.items(): if isinstance(fval, (int, float)): if (fname, type(fval), label) in self._mapping: encoding.append((self._mapping[fname, type(fval), label], fval)) else: if (fname, fval, label) in self._mapping: encoding.append((self._mapping[fname, fval, label], 1)) elif self._unseen: for label2 in self._labels: if (fname, fval, label2) in self._mapping: break else: if fname in self._unseen: encoding.append((self._unseen[fname], 1)) if self._alwayson and label in self._alwayson: encoding.append((self._alwayson[label], 1)) return encoding def describe(self, f_id): if not isinstance(f_id, int): raise TypeError("describe() expected an int") try: self._inv_mapping except AttributeError: self._inv_mapping = [-1] * len(self._mapping) for (info, i) in self._mapping.items(): self._inv_mapping[i] = info if f_id < len(self._mapping): (fname, fval, label) = self._inv_mapping[f_id] return f"{fname}=={fval!r} and label is {label!r}" elif self._alwayson and f_id in self._alwayson.values(): for (label, f_id2) in self._alwayson.items(): if f_id == f_id2: return "label is %r" % label elif self._unseen and f_id in self._unseen.values(): for (fname, f_id2) in self._unseen.items(): if f_id == f_id2: return "%s is unseen" % fname else: raise ValueError("Bad feature id") def labels(self): return self._labels def length(self): return self._length @classmethod def train(cls, train_toks, count_cutoff=0, labels=None, **options): mapping = {} seen_labels = set() count = defaultdict(int) for (tok, label) in train_toks: if labels and label not in labels: raise ValueError("Unexpected label %s" % label) seen_labels.add(label) for (fname, fval) in tok.items(): if type(fval) in (int, float): fval = type(fval) count[fname, fval] += 1 if count[fname, fval] >= count_cutoff: if (fname, fval, label) not in mapping: mapping[fname, fval, label] = len(mapping) if labels is None: labels = seen_labels return cls(labels, mapping, **options) def train_maxent_classifier_with_gis( train_toks, 
trace=3, encoding=None, labels=None, **cutoffs ): cutoffs.setdefault("max_iter", 100) cutoffchecker = CutoffChecker(cutoffs) if encoding is None: encoding = GISEncoding.train(train_toks, labels=labels) if not hasattr(encoding, "C"): raise TypeError( "The GIS algorithm requires an encoding that " "defines C (e.g., GISEncoding)." ) Cinv = 1.0 / encoding.C empirical_fcount = calculate_empirical_fcount(train_toks, encoding) unattested = set(numpy.nonzero(empirical_fcount == 0)[0]) weights = numpy.zeros(len(empirical_fcount), "d") for fid in unattested: weights[fid] = numpy.NINF classifier = ConditionalExponentialClassifier(encoding, weights) log_empirical_fcount = numpy.log2(empirical_fcount) del empirical_fcount if trace > 0: print(" ==> Training (%d iterations)" % cutoffs["max_iter"]) if trace > 2: print() print(" Iteration Log Likelihood Accuracy") print(" ---------------------------------------") try: while True: if trace > 2: ll = cutoffchecker.ll or log_likelihood(classifier, train_toks) acc = cutoffchecker.acc or accuracy(classifier, train_toks) iternum = cutoffchecker.iter print(" %9d %14.5f %9.3f" % (iternum, ll, acc)) estimated_fcount = calculate_estimated_fcount( classifier, train_toks, encoding ) for fid in unattested: estimated_fcount[fid] += 1 log_estimated_fcount = numpy.log2(estimated_fcount) del estimated_fcount weights = classifier.weights() weights += (log_empirical_fcount - log_estimated_fcount) * Cinv classifier.set_weights(weights) if cutoffchecker.check(classifier, train_toks): break except KeyboardInterrupt: print(" Training stopped: keyboard interrupt") except: raise if trace > 2: ll = log_likelihood(classifier, train_toks) acc = accuracy(classifier, train_toks) print(f" Final {ll:14.5f} {acc:9.3f}") return classifier def calculate_empirical_fcount(train_toks, encoding): fcount = numpy.zeros(encoding.length(), "d") for tok, label in train_toks: for (index, val) in encoding.encode(tok, label): fcount[index] += val return fcount def calculate_estimated_fcount(classifier, train_toks, encoding): fcount = numpy.zeros(encoding.length(), "d") for tok, label in train_toks: pdist = classifier.prob_classify(tok) for label in pdist.samples(): prob = pdist.prob(label) for (fid, fval) in encoding.encode(tok, label): fcount[fid] += prob * fval return fcount def train_maxent_classifier_with_iis( train_toks, trace=3, encoding=None, labels=None, **cutoffs ): cutoffs.setdefault("max_iter", 100) cutoffchecker = CutoffChecker(cutoffs) if encoding is None: encoding = BinaryMaxentFeatureEncoding.train(train_toks, labels=labels) empirical_ffreq = calculate_empirical_fcount(train_toks, encoding) / len(train_toks) nfmap = calculate_nfmap(train_toks, encoding) nfarray = numpy.array(sorted(nfmap, key=nfmap.__getitem__), "d") nftranspose = numpy.reshape(nfarray, (len(nfarray), 1)) unattested = set(numpy.nonzero(empirical_ffreq == 0)[0]) weights = numpy.zeros(len(empirical_ffreq), "d") for fid in unattested: weights[fid] = numpy.NINF classifier = ConditionalExponentialClassifier(encoding, weights) if trace > 0: print(" ==> Training (%d iterations)" % cutoffs["max_iter"]) if trace > 2: print() print(" Iteration Log Likelihood Accuracy") print(" ---------------------------------------") try: while True: if trace > 2: ll = cutoffchecker.ll or log_likelihood(classifier, train_toks) acc = cutoffchecker.acc or accuracy(classifier, train_toks) iternum = cutoffchecker.iter print(" %9d %14.5f %9.3f" % (iternum, ll, acc)) deltas = calculate_deltas( train_toks, classifier, unattested, empirical_ffreq, nfmap, 
nfarray, nftranspose, encoding, ) weights = classifier.weights() weights += deltas classifier.set_weights(weights) if cutoffchecker.check(classifier, train_toks): break except KeyboardInterrupt: print(" Training stopped: keyboard interrupt") except: raise if trace > 2: ll = log_likelihood(classifier, train_toks) acc = accuracy(classifier, train_toks) print(f" Final {ll:14.5f} {acc:9.3f}") return classifier def calculate_nfmap(train_toks, encoding): nfset = set() for tok, _ in train_toks: for label in encoding.labels(): nfset.add(sum(val for (id, val) in encoding.encode(tok, label))) return {nf: i for (i, nf) in enumerate(nfset)} def calculate_deltas( train_toks, classifier, unattested, ffreq_empirical, nfmap, nfarray, nftranspose, encoding, ): r NEWTON_CONVERGE = 1e-12 MAX_NEWTON = 300 deltas = numpy.ones(encoding.length(), "d") A = numpy.zeros((len(nfmap), encoding.length()), "d") for tok, label in train_toks: dist = classifier.prob_classify(tok) for label in encoding.labels(): feature_vector = encoding.encode(tok, label) nf = sum(val for (id, val) in feature_vector) for (id, val) in feature_vector: A[nfmap[nf], id] += dist.prob(label) * val A /= len(train_toks) for rangenum in range(MAX_NEWTON): nf_delta = numpy.outer(nfarray, deltas) exp_nf_delta = 2**nf_delta nf_exp_nf_delta = nftranspose * exp_nf_delta sum1 = numpy.sum(exp_nf_delta * A, axis=0) sum2 = numpy.sum(nf_exp_nf_delta * A, axis=0) for fid in unattested: sum2[fid] += 1 deltas -= (ffreq_empirical - sum1) / -sum2 n_error = numpy.sum(abs(ffreq_empirical - sum1)) / numpy.sum(abs(deltas)) if n_error < NEWTON_CONVERGE: return deltas return deltas def train_maxent_classifier_with_megam( train_toks, trace=3, encoding=None, labels=None, gaussian_prior_sigma=0, **kwargs ): explicit = True bernoulli = True if "explicit" in kwargs: explicit = kwargs["explicit"] if "bernoulli" in kwargs: bernoulli = kwargs["bernoulli"] if encoding is None: count_cutoff = kwargs.get("count_cutoff", 0) encoding = BinaryMaxentFeatureEncoding.train( train_toks, count_cutoff, labels=labels, alwayson_features=True ) elif labels is not None: raise ValueError("Specify encoding or labels, not both") try: fd, trainfile_name = tempfile.mkstemp(prefix="nltk-") with open(trainfile_name, "w") as trainfile: write_megam_file( train_toks, encoding, trainfile, explicit=explicit, bernoulli=bernoulli ) os.close(fd) except (OSError, ValueError) as e: raise ValueError("Error while creating megam training file: %s" % e) from e options = [] options += ["-nobias", "-repeat", "10"] if explicit: options += ["-explicit"] if not bernoulli: options += ["-fvals"] if gaussian_prior_sigma: inv_variance = 1.0 / gaussian_prior_sigma**2 else: inv_variance = 0 options += ["-lambda", "%.2f" % inv_variance, "-tune"] if trace < 3: options += ["-quiet"] if "max_iter" in kwargs: options += ["-maxi", "%s" % kwargs["max_iter"]] if "ll_delta" in kwargs: options += ["-dpp", "%s" % abs(kwargs["ll_delta"])] if hasattr(encoding, "cost"): options += ["-multilabel"] options += ["multiclass", trainfile_name] stdout = call_megam(options) try: os.remove(trainfile_name) except OSError as e: print(f"Warning: unable to delete {trainfile_name}: {e}") weights = parse_megam_weights(stdout, encoding.length(), explicit) weights *= numpy.log2(numpy.e) return MaxentClassifier(encoding, weights) class TadmMaxentClassifier(MaxentClassifier): @classmethod def train(cls, train_toks, **kwargs): algorithm = kwargs.get("algorithm", "tao_lmvm") trace = kwargs.get("trace", 3) encoding = kwargs.get("encoding", None) labels = 
kwargs.get("labels", None) sigma = kwargs.get("gaussian_prior_sigma", 0) count_cutoff = kwargs.get("count_cutoff", 0) max_iter = kwargs.get("max_iter") ll_delta = kwargs.get("min_lldelta") if not encoding: encoding = TadmEventMaxentFeatureEncoding.train( train_toks, count_cutoff, labels=labels ) trainfile_fd, trainfile_name = tempfile.mkstemp( prefix="nltk-tadm-events-", suffix=".gz" ) weightfile_fd, weightfile_name = tempfile.mkstemp(prefix="nltk-tadm-weights-") trainfile = gzip_open_unicode(trainfile_name, "w") write_tadm_file(train_toks, encoding, trainfile) trainfile.close() options = [] options.extend(["-monitor"]) options.extend(["-method", algorithm]) if sigma: options.extend(["-l2", "%.6f" % sigma**2]) if max_iter: options.extend(["-max_it", "%d" % max_iter]) if ll_delta: options.extend(["-fatol", "%.6f" % abs(ll_delta)]) options.extend(["-events_in", trainfile_name]) options.extend(["-params_out", weightfile_name]) if trace < 3: options.extend(["2>&1"]) else: options.extend(["-summary"]) call_tadm(options) with open(weightfile_name) as weightfile: weights = parse_tadm_weights(weightfile) os.remove(trainfile_name) os.remove(weightfile_name) weights *= numpy.log2(numpy.e) return cls(encoding, weights) def demo(): from nltk.classify.util import names_demo classifier = names_demo(MaxentClassifier.train) if __name__ == "__main__": demo()
natural language toolkit naive bayes classifiers c 20012023 nltk project edward loper edlopergmail com url https www nltk org for license information see license txt a classifier based on the naive bayes algorithm in order to find the probability for a label this algorithm first uses the bayes rule to express plabelfeatures in terms of plabel and pfeatureslabel plabel pfeatureslabel plabelfeatures pfeatures the algorithm then makes the naive assumption that all features are independent given the label plabel pf1label pfnlabel plabelfeatures pfeatures rather than computing pfeatures explicitly the algorithm just calculates the numerator for each label and normalizes them so they sum to one plabel pf1label pfnlabel plabelfeatures suml pl pf1l pfnl naive bayes classifier a naive bayes classifier naive bayes classifiers are paramaterized by two probability distributions plabel gives the probability that an input will receive each label given no information about the input s features pfnamefvallabel gives the probability that a given feature fname will receive a given value fval given that the label label if the classifier encounters an input with a feature that has never been seen with any label then rather than assigning a probability of 0 to all labels it will ignore that feature the feature value none is reserved for unseen feature values you generally should not use none as a feature value for one of your own features param labelprobdist plabel the probability distribution over labels it is expressed as a probdisti whose samples are labels i e plabel labelprobdist problabel param featureprobdist pfnamefvallabel the probability distribution for feature values given labels it is expressed as a dictionary whose keys are label fname pairs and whose values are probdisti objects over feature values i e pfnamefvallabel featureprobdistlabel fname probfval if a given label fname is not a key in featureprobdist then it is assumed that the corresponding pfnamefvallabel is 0 for all values of fval discard any feature names that we ve never seen before otherwise we ll just assign a probability of 0 to everything print ignoring unseen feature s fname find the log probability of each label given the features start with the log probability of the label itself then add in the log probability of features given labels nb this case will never come up if the classifier was naivebayesclassifier train determine the most relevant features and display them return a list of the most informative features used by this classifier for the purpose of this function the informativeness of a feature fname fval is equal to the highest value of pfnamefvallabel for any label divided by the lowest value of pfnamefvallabel for any label max pfnamefvallabel1 pfnamefvallabel2 the set of fname fval pairs used by this classifier the max min probability associated w each fname fval pair maps fname fval float convert features to a list sort it by how informative features are param labeledfeaturesets a list of classified featuresets i e a list of tuples featureset label count up how many times each feature value occurred given the label and featurename increment freqfvallabel fname record that fname can take the value fval keep a list of all feature names if a feature didn t have a value given for an instance then we assume that it gets the implicit value none this loop counts up the number of missing feature values for each label fname pair and increments the count of the fval none by that amount only add a none key when necessary i 
e if there are any samples with feature fname missing create the plabel distribution create the pfvallabel fname distribution demo natural language toolkit naive bayes classifiers c 2001 2023 nltk project edward loper edloper gmail com url https www nltk org for license information see license txt a classifier based on the naive bayes algorithm in order to find the probability for a label this algorithm first uses the bayes rule to express p label features in terms of p label and p features label p label p features label p label features p features the algorithm then makes the naive assumption that all features are independent given the label p label p f1 label p fn label p label features p features rather than computing p features explicitly the algorithm just calculates the numerator for each label and normalizes them so they sum to one p label p f1 label p fn label p label features sum l p l p f1 l p fn l naive bayes classifier a naive bayes classifier naive bayes classifiers are paramaterized by two probability distributions p label gives the probability that an input will receive each label given no information about the input s features p fname fval label gives the probability that a given feature fname will receive a given value fval given that the label label if the classifier encounters an input with a feature that has never been seen with any label then rather than assigning a probability of 0 to all labels it will ignore that feature the feature value none is reserved for unseen feature values you generally should not use none as a feature value for one of your own features param label_probdist p label the probability distribution over labels it is expressed as a probdisti whose samples are labels i e p label label_probdist prob label param feature_probdist p fname fval label the probability distribution for feature values given labels it is expressed as a dictionary whose keys are label fname pairs and whose values are probdisti objects over feature values i e p fname fval label feature_probdist label fname prob fval if a given label fname is not a key in feature_probdist then it is assumed that the corresponding p fname fval label is 0 for all values of fval discard any feature names that we ve never seen before otherwise we ll just assign a probability of 0 to everything print ignoring unseen feature s fname find the log probability of each label given the features start with the log probability of the label itself then add in the log probability of features given labels nb this case will never come up if the classifier was naivebayesclassifier train inf determine the most relevant features and display them return a list of the most informative features used by this classifier for the purpose of this function the informativeness of a feature fname fval is equal to the highest value of p fname fval label for any label divided by the lowest value of p fname fval label for any label max p fname fval label1 p fname fval label2 the set of fname fval pairs used by this classifier the max min probability associated w each fname fval pair maps fname fval float convert features to a list sort it by how informative features are param labeled_featuresets a list of classified featuresets i e a list of tuples featureset label count up how many times each feature value occurred given the label and featurename increment freq fval label fname record that fname can take the value fval keep a list of all feature names if a feature didn t have a value given for an instance then we assume that 
it gets the implicit value none this loop counts up the number of missing feature values for each label fname pair and increments the count of the fval none by that amount only add a none key when necessary i e if there are any samples with feature fname missing create the p label distribution create the p fval label fname distribution demo
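The probability computation described above can be restated concretely. The following is a small, hand-rolled illustration (not part of the NLTK source) of the label scoring the docstring describes; the toy prior and conditional probabilities are invented.

# Illustration only: un-normalised scores P(label) * prod_i P(f_i|label),
# accumulated in log space and then normalised so they sum to one.
import math

p_label = {"spam": 0.4, "ham": 0.6}                      # P(label)
p_f_given_label = {                                       # P(fname=fval | label)
    ("spam", "contains(free)"): 0.8,
    ("ham", "contains(free)"): 0.1,
    ("spam", "contains(meeting)"): 0.05,
    ("ham", "contains(meeting)"): 0.5,
}
observed = ["contains(free)", "contains(meeting)"]

logscore = {}
for label in p_label:
    logscore[label] = math.log(p_label[label])
    for f in observed:
        logscore[label] += math.log(p_f_given_label[label, f])

z = sum(math.exp(s) for s in logscore.values())           # normalising constant
posterior = {label: math.exp(s) / z for label, s in logscore.items()}
print(posterior)                                          # values sum to 1.0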
from collections import defaultdict from nltk.classify.api import ClassifierI from nltk.probability import DictionaryProbDist, ELEProbDist, FreqDist, sum_logs class NaiveBayesClassifier(ClassifierI): def __init__(self, label_probdist, feature_probdist): self._label_probdist = label_probdist self._feature_probdist = feature_probdist self._labels = list(label_probdist.samples()) def labels(self): return self._labels def classify(self, featureset): return self.prob_classify(featureset).max() def prob_classify(self, featureset): featureset = featureset.copy() for fname in list(featureset.keys()): for label in self._labels: if (label, fname) in self._feature_probdist: break else: del featureset[fname] logprob = {} for label in self._labels: logprob[label] = self._label_probdist.logprob(label) for label in self._labels: for (fname, fval) in featureset.items(): if (label, fname) in self._feature_probdist: feature_probs = self._feature_probdist[label, fname] logprob[label] += feature_probs.logprob(fval) else: logprob[label] += sum_logs([]) return DictionaryProbDist(logprob, normalize=True, log=True) def show_most_informative_features(self, n=10): cpdist = self._feature_probdist print("Most Informative Features") for (fname, fval) in self.most_informative_features(n): def labelprob(l): return cpdist[l, fname].prob(fval) labels = sorted( (l for l in self._labels if fval in cpdist[l, fname].samples()), key=lambda element: (-labelprob(element), element), reverse=True, ) if len(labels) == 1: continue l0 = labels[0] l1 = labels[-1] if cpdist[l0, fname].prob(fval) == 0: ratio = "INF" else: ratio = "%8.1f" % ( cpdist[l1, fname].prob(fval) / cpdist[l0, fname].prob(fval) ) print( "%24s = %-14r %6s : %-6s = %s : 1.0" % (fname, fval, ("%s" % l1)[:6], ("%s" % l0)[:6], ratio) ) def most_informative_features(self, n=100): if hasattr(self, "_most_informative_features"): return self._most_informative_features[:n] else: features = set() maxprob = defaultdict(lambda: 0.0) minprob = defaultdict(lambda: 1.0) for (label, fname), probdist in self._feature_probdist.items(): for fval in probdist.samples(): feature = (fname, fval) features.add(feature) p = probdist.prob(fval) maxprob[feature] = max(p, maxprob[feature]) minprob[feature] = min(p, minprob[feature]) if minprob[feature] == 0: features.discard(feature) self._most_informative_features = sorted( features, key=lambda feature_: ( minprob[feature_] / maxprob[feature_], feature_[0], feature_[1] in [None, False, True], str(feature_[1]).lower(), ), ) return self._most_informative_features[:n] @classmethod def train(cls, labeled_featuresets, estimator=ELEProbDist): label_freqdist = FreqDist() feature_freqdist = defaultdict(FreqDist) feature_values = defaultdict(set) fnames = set() for featureset, label in labeled_featuresets: label_freqdist[label] += 1 for fname, fval in featureset.items(): feature_freqdist[label, fname][fval] += 1 feature_values[fname].add(fval) fnames.add(fname) for label in label_freqdist: num_samples = label_freqdist[label] for fname in fnames: count = feature_freqdist[label, fname].N() if num_samples - count > 0: feature_freqdist[label, fname][None] += num_samples - count feature_values[fname].add(None) label_probdist = estimator(label_freqdist) feature_probdist = {} for ((label, fname), freqdist) in feature_freqdist.items(): probdist = estimator(freqdist, bins=len(feature_values[fname])) feature_probdist[label, fname] = probdist return cls(label_probdist, feature_probdist) def demo(): from nltk.classify.util import names_demo classifier = 
names_demo(NaiveBayesClassifier.train) classifier.show_most_informative_features() if __name__ == "__main__": demo()
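A brief usage sketch for the classifier defined above (not part of the NLTK source); the tiny training set is invented, and it assumes the class is importable as nltk.classify.NaiveBayesClassifier.

# Toy example: train on four labelled featuresets and inspect the result.
from nltk.classify import NaiveBayesClassifier

train = [
    ({"last_letter": "a"}, "female"),
    ({"last_letter": "k"}, "male"),
    ({"last_letter": "a"}, "female"),
    ({"last_letter": "o"}, "male"),
]
nb = NaiveBayesClassifier.train(train)
print(nb.classify({"last_letter": "a"}))                 # most likely label
dist = nb.prob_classify({"last_letter": "a"})            # full distribution
print({label: round(dist.prob(label), 3) for label in dist.samples()})
nb.show_most_informative_features(2)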
natural language toolkit positive naive bayes classifier c 2012 nltk project alessandro presta alessandro prestagmail com url https www nltk org for license information see license txt a variant of the naive bayes classifier that performs binary classification with partiallylabeled training sets in other words assume we want to build a classifier that assigns each example to one of two complementary classes e g male names and female names if we have a training set with labeled examples for both classes we can use a standard naive bayes classifier however consider the case when we only have labeled examples for one of the classes and other unlabeled examples then assuming a prior distribution on the two labels we can use the unlabeled set to estimate the frequencies of the various features let the two possible labels be 1 and 0 and let s say we only have examples labeled 1 and unlabeled examples we are also given an estimate of p1 we compute pfeature1 exactly as in the standard case to compute pfeature0 we first estimate pfeature from the unlabeled set we are assuming that the unlabeled examples are drawn according to the given prior distribution and then express the conditional probability as pfeature pfeature1 p1 pfeature0 p0 example from nltk classify import positivenaivebayesclassifier some sentences about sports sportssentences the team dominated the game they lost the ball the game was intense the goalkeeper catched the ball the other team controlled the ball mixed topics including sports varioussentences the president did not comment i lost the keys the team won the game sara has two kids the ball went off the court they had the ball for the whole game the show is over the features of a sentence are simply the words it contains def featuressentence words sentence lower split return dict containss w true for w in words we use the sports sentences as positive examples the mixed ones ad unlabeled examples positivefeaturesets mapfeatures sportssentences unlabeledfeaturesets mapfeatures varioussentences classifier positivenaivebayesclassifier trainpositivefeaturesets unlabeledfeaturesets is the following sentence about sports classifier classifyfeatures the cat is on the table false what about this one classifier classifyfeatures my team lost the game true positive naive bayes classifier param positivefeaturesets an iterable of featuresets that are known as positive examples i e their label is true param unlabeledfeaturesets an iterable of featuresets whose label is unknown param positiveprobprior a prior estimate of the probability of the label true default 0 5 count up how many times each feature value occurred in positive examples count up how many times each feature value occurred in unlabeled examples if a feature didn t have a value given for an instance then we assume that it gets the implicit value none create the plabel distribution create the pfvallabel fname distribution todo we need to add some kind of smoothing here instead of setting negative probabilities to zero and normalizing demo natural language toolkit positive naive bayes classifier c 2012 nltk project alessandro presta alessandro presta gmail com url https www nltk org for license information see license txt a variant of the naive bayes classifier that performs binary classification with partially labeled training sets in other words assume we want to build a classifier that assigns each example to one of two complementary classes e g male names and female names if we have a training set with labeled examples for 
both classes we can use a standard naive bayes classifier however consider the case when we only have labeled examples for one of the classes and other unlabeled examples then assuming a prior distribution on the two labels we can use the unlabeled set to estimate the frequencies of the various features let the two possible labels be 1 and 0 and let s say we only have examples labeled 1 and unlabeled examples we are also given an estimate of p 1 we compute p feature 1 exactly as in the standard case to compute p feature 0 we first estimate p feature from the unlabeled set we are assuming that the unlabeled examples are drawn according to the given prior distribution and then express the conditional probability as p feature p feature 1 p 1 p feature 0 p 0 example from nltk classify import positivenaivebayesclassifier some sentences about sports sports_sentences the team dominated the game they lost the ball the game was intense the goalkeeper catched the ball the other team controlled the ball mixed topics including sports various_sentences the president did not comment i lost the keys the team won the game sara has two kids the ball went off the court they had the ball for the whole game the show is over the features of a sentence are simply the words it contains def features sentence words sentence lower split return dict contains s w true for w in words we use the sports sentences as positive examples the mixed ones ad unlabeled examples positive_featuresets map features sports_sentences unlabeled_featuresets map features various_sentences classifier positivenaivebayesclassifier train positive_featuresets unlabeled_featuresets is the following sentence about sports classifier classify features the cat is on the table false what about this one classifier classify features my team lost the game true positive naive bayes classifier param positive_featuresets an iterable of featuresets that are known as positive examples i e their label is true param unlabeled_featuresets an iterable of featuresets whose label is unknown param positive_prob_prior a prior estimate of the probability of the label true default 0 5 count up how many times each feature value occurred in positive examples count up how many times each feature value occurred in unlabeled examples if a feature didn t have a value given for an instance then we assume that it gets the implicit value none create the p label distribution create the p fval label fname distribution todo we need to add some kind of smoothing here instead of setting negative probabilities to zero and normalizing demo
from collections import defaultdict from nltk.classify.naivebayes import NaiveBayesClassifier from nltk.probability import DictionaryProbDist, ELEProbDist, FreqDist class PositiveNaiveBayesClassifier(NaiveBayesClassifier): @staticmethod def train( positive_featuresets, unlabeled_featuresets, positive_prob_prior=0.5, estimator=ELEProbDist, ): positive_feature_freqdist = defaultdict(FreqDist) unlabeled_feature_freqdist = defaultdict(FreqDist) feature_values = defaultdict(set) fnames = set() num_positive_examples = 0 for featureset in positive_featuresets: for fname, fval in featureset.items(): positive_feature_freqdist[fname][fval] += 1 feature_values[fname].add(fval) fnames.add(fname) num_positive_examples += 1 num_unlabeled_examples = 0 for featureset in unlabeled_featuresets: for fname, fval in featureset.items(): unlabeled_feature_freqdist[fname][fval] += 1 feature_values[fname].add(fval) fnames.add(fname) num_unlabeled_examples += 1 for fname in fnames: count = positive_feature_freqdist[fname].N() positive_feature_freqdist[fname][None] += num_positive_examples - count feature_values[fname].add(None) for fname in fnames: count = unlabeled_feature_freqdist[fname].N() unlabeled_feature_freqdist[fname][None] += num_unlabeled_examples - count feature_values[fname].add(None) negative_prob_prior = 1.0 - positive_prob_prior label_probdist = DictionaryProbDist( {True: positive_prob_prior, False: negative_prob_prior} ) feature_probdist = {} for fname, freqdist in positive_feature_freqdist.items(): probdist = estimator(freqdist, bins=len(feature_values[fname])) feature_probdist[True, fname] = probdist for fname, freqdist in unlabeled_feature_freqdist.items(): global_probdist = estimator(freqdist, bins=len(feature_values[fname])) negative_feature_probs = {} for fval in feature_values[fname]: prob = ( global_probdist.prob(fval) - positive_prob_prior * feature_probdist[True, fname].prob(fval) ) / negative_prob_prior negative_feature_probs[fval] = max(prob, 0.0) feature_probdist[False, fname] = DictionaryProbDist( negative_feature_probs, normalize=True ) return PositiveNaiveBayesClassifier(label_probdist, feature_probdist) def demo(): from nltk.classify.util import partial_names_demo classifier = partial_names_demo(PositiveNaiveBayesClassifier.train) classifier.show_most_informative_features()
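To make the estimate used in train above concrete: P(feature) = P(feature|1)P(1) + P(feature|0)P(0), so P(feature|0) is backed out from the unlabeled-set estimate and clipped at zero. The numbers below are invented for illustration and are not taken from the source.

# Illustration only: backing out the negative-class feature probability.
p_1 = 0.5                           # prior for the positive label
p_0 = 1.0 - p_1
p_feature = 0.30                    # P(feature), estimated from the unlabeled set
p_feature_given_1 = 0.55            # P(feature|1), estimated from the positive set
p_feature_given_0 = max((p_feature - p_feature_given_1 * p_1) / p_0, 0.0)
print(p_feature_given_0)            # 0.05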
natural language toolkit rte classifier c 2001 2023 nltk project ewan klein ewan inf ed ac uk url https www nltk org for license information see license txt simple classifier for rte corpus it calculates the overlap in words and named entities between text and hypothesis and also whether there are words named entities in the hypothesis which fail to occur in the text since this is an indicator that the hypothesis is more informative than i e not entailed by the text to do better named entity classification to do add lemmatization this builds a bag of words for both the text and the hypothesis after throwing away some stopwords then calculates overlap and difference param rtepair a rtepair from which features should be extracted param stop if true stopwords are thrown away type stop bool try to tokenize so that abbreviations monetary amounts email addresses urls are single tokens get the set of word types for text and hypothesis compute the overlap between text and hypothesis param toktype distinguish named entities from ordinary words type toktype ne or word compute the extraneous material in the hypothesis param toktype distinguish named entities from ordinary words type toktype ne or word this just assumes that words in all caps or titles are named entities type token str use morphy from wordnet to find the base form of verbs train the classifier megam based algorithms use default gis iis maxent algorithm
from nltk.classify.maxent import MaxentClassifier from nltk.classify.util import accuracy from nltk.tokenize import RegexpTokenizer class RTEFeatureExtractor: def __init__(self, rtepair, stop=True, use_lemmatize=False): self.stop = stop self.stopwords = { "a", "the", "it", "they", "of", "in", "to", "is", "have", "are", "were", "and", "very", ".", ",", } self.negwords = {"no", "not", "never", "failed", "rejected", "denied"} tokenizer = RegexpTokenizer(r"[\w.@:/]+|\w+|\$[\d.]+") self.text_tokens = tokenizer.tokenize(rtepair.text) self.hyp_tokens = tokenizer.tokenize(rtepair.hyp) self.text_words = set(self.text_tokens) self.hyp_words = set(self.hyp_tokens) if use_lemmatize: self.text_words = {self._lemmatize(token) for token in self.text_tokens} self.hyp_words = {self._lemmatize(token) for token in self.hyp_tokens} if self.stop: self.text_words = self.text_words - self.stopwords self.hyp_words = self.hyp_words - self.stopwords self._overlap = self.hyp_words & self.text_words self._hyp_extra = self.hyp_words - self.text_words self._txt_extra = self.text_words - self.hyp_words def overlap(self, toktype, debug=False): ne_overlap = {token for token in self._overlap if self._ne(token)} if toktype == "ne": if debug: print("ne overlap", ne_overlap) return ne_overlap elif toktype == "word": if debug: print("word overlap", self._overlap - ne_overlap) return self._overlap - ne_overlap else: raise ValueError("Type not recognized:'%s'" % toktype) def hyp_extra(self, toktype, debug=True): ne_extra = {token for token in self._hyp_extra if self._ne(token)} if toktype == "ne": return ne_extra elif toktype == "word": return self._hyp_extra - ne_extra else: raise ValueError("Type not recognized: '%s'" % toktype) @staticmethod def _ne(token): if token.istitle() or token.isupper(): return True return False @staticmethod def _lemmatize(word): from nltk.corpus import wordnet as wn lemma = wn.morphy(word, pos=wn.VERB) if lemma is not None: return lemma return word def rte_features(rtepair): extractor = RTEFeatureExtractor(rtepair) features = {} features["alwayson"] = True features["word_overlap"] = len(extractor.overlap("word")) features["word_hyp_extra"] = len(extractor.hyp_extra("word")) features["ne_overlap"] = len(extractor.overlap("ne")) features["ne_hyp_extra"] = len(extractor.hyp_extra("ne")) features["neg_txt"] = len(extractor.negwords & extractor.text_words) features["neg_hyp"] = len(extractor.negwords & extractor.hyp_words) return features def rte_featurize(rte_pairs): return [(rte_features(pair), pair.value) for pair in rte_pairs] def rte_classifier(algorithm, sample_N=None): from nltk.corpus import rte as rte_corpus train_set = rte_corpus.pairs(["rte1_dev.xml", "rte2_dev.xml", "rte3_dev.xml"]) test_set = rte_corpus.pairs(["rte1_test.xml", "rte2_test.xml", "rte3_test.xml"]) if sample_N is not None: train_set = train_set[:sample_N] test_set = test_set[:sample_N] featurized_train_set = rte_featurize(train_set) featurized_test_set = rte_featurize(test_set) print("Training classifier...") if algorithm in ["megam"]: clf = MaxentClassifier.train(featurized_train_set, algorithm) elif algorithm in ["GIS", "IIS"]: clf = MaxentClassifier.train(featurized_train_set, algorithm) else: err_msg = str( "RTEClassifier only supports these algorithms:\n " "'megam', 'GIS', 'IIS'.\n" ) raise Exception(err_msg) print("Testing classifier...") acc = accuracy(clf, featurized_test_set) print("Accuracy: %6.4f" % acc) return clf
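A quick sketch of the feature extractor above on a hand-made text/hypothesis pair (not part of the NLTK source). RTEPair objects normally come from nltk.corpus.rte, but rte_features only reads the pair's .text and .hyp attributes, so a simple namespace object is enough for illustration; the sentences are invented.

# Illustration only: run rte_features on an ad-hoc pair.
from types import SimpleNamespace
from nltk.classify.rte_classify import rte_features

pair = SimpleNamespace(
    text="Cairo is the capital of Egypt and its largest city.",
    hyp="Cairo is in Egypt.",
)
print(rte_features(pair))
# e.g. {'alwayson': True, 'word_overlap': ..., 'ne_overlap': ..., ...}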
natural language toolkit interface to scikitlearn classifiers lars buitinck l j buitinckuva nl url https www nltk org for license information see license txt scikitlearn https scikitlearn org is a machine learning library for python it supports many classification algorithms including svms naive bayes logistic regression maxent and decision trees this package implements a wrapper around scikitlearn classifiers to use this wrapper construct a scikitlearn estimator object then use that to construct a sklearnclassifier e g to wrap a linear svm with default settings from sklearn svm import linearsvc from nltk classify scikitlearn import sklearnclassifier classif sklearnclassifierlinearsvc a scikitlearn classifier may include preprocessing steps when it s wrapped in a pipeline object the following constructs and wraps a naive bayes text classifier with tfidf weighting and chisquare feature selection to get the best 1000 features from sklearn featureextraction text import tfidftransformer from sklearn featureselection import selectkbest chi2 from sklearn naivebayes import multinomialnb from sklearn pipeline import pipeline pipeline pipeline tfidf tfidftransformer chi2 selectkbestchi2 k1000 nb multinomialnb classif sklearnclassifierpipeline wrapper for scikitlearn classifiers def initself estimator dtypefloat sparsetrue self clf estimator self encoder labelencoder self vectorizer dictvectorizerdtypedtype sparsesparse def reprself return sklearnclassifierr self clf def classifymanyself featuresets x self vectorizer transformfeaturesets classes self encoder classes return classesi for i in self clf predictx def probclassifymanyself featuresets x self vectorizer transformfeaturesets yprobalist self clf predictprobax return self makeprobdistyproba for yproba in yprobalist def labelsself return listself encoder classes def trainself labeledfeaturesets x y listziplabeledfeaturesets x self vectorizer fittransformx y self encoder fittransformy self clf fitx y return self def makeprobdistself yproba classes self encoder classes return dictionaryprobdistclassesi p for i p in enumerateyproba if name main from sklearn linearmodel import logisticregression from sklearn naivebayes import bernoullinb from nltk classify util import namesdemo namesdemofeatures bernoulli naive bayes is designed for binary classification we set the binarize option to false since we know we re passing boolean features printscikitlearn naive bayes namesdemo sklearnclassifierbernoullinbbinarizefalse train featuresnamesdemofeatures the c parameter on logistic regression maxent controls regularization the higher it s set the less regularized the classifier is printnnscikitlearn logistic regression namesdemo sklearnclassifierlogisticregressionc1000 train featuresnamesdemofeatures natural language toolkit interface to scikit learn classifiers lars buitinck l j buitinck uva nl url https www nltk org for license information see license txt scikit learn https scikit learn org is a machine learning library for python it supports many classification algorithms including svms naive bayes logistic regression maxent and decision trees this package implements a wrapper around scikit learn classifiers to use this wrapper construct a scikit learn estimator object then use that to construct a sklearnclassifier e g to wrap a linear svm with default settings from sklearn svm import linearsvc from nltk classify scikitlearn import sklearnclassifier classif sklearnclassifier linearsvc a scikit learn classifier may include preprocessing steps when it s 
wrapped in a pipeline object the following constructs and wraps a naive bayes text classifier with tf idf weighting and chi square feature selection to get the best 1000 features from sklearn feature_extraction text import tfidftransformer from sklearn feature_selection import selectkbest chi2 from sklearn naive_bayes import multinomialnb from sklearn pipeline import pipeline pipeline pipeline tfidf tfidftransformer chi2 selectkbest chi2 k 1000 nb multinomialnb classif sklearnclassifier pipeline wrapper for scikit learn classifiers param estimator scikit learn classifier object param dtype data type used when building feature array scikit learn estimators work exclusively on numeric data the default value should be fine for almost all situations param sparse whether to use sparse matrices internally the estimator must support these not all scikit learn classifiers do see their respective documentation and look for sparse matrix the default value is true since most nlp problems involve sparse feature sets setting this to false may take a great amount of memory type sparse boolean classify a batch of samples param featuresets an iterable over featuresets each a dict mapping strings to either numbers booleans or strings return the predicted class label for each input sample rtype list compute per class probabilities for a batch of samples param featuresets an iterable over featuresets each a dict mapping strings to either numbers booleans or strings rtype list of probdisti the class labels used by this classifier rtype list train fit the scikit learn estimator param labeled_featuresets a list of featureset label where each featureset is a dict mapping strings to either numbers booleans or strings bernoulli naive bayes is designed for binary classification we set the binarize option to false since we know we re passing boolean features the c parameter on logistic regression maxent controls regularization the higher it s set the less regularized the classifier is
from nltk.classify.api import ClassifierI from nltk.probability import DictionaryProbDist try: from sklearn.feature_extraction import DictVectorizer from sklearn.preprocessing import LabelEncoder except ImportError: pass __all__ = ["SklearnClassifier"] class SklearnClassifier(ClassifierI): def __init__(self, estimator, dtype=float, sparse=True): self._clf = estimator self._encoder = LabelEncoder() self._vectorizer = DictVectorizer(dtype=dtype, sparse=sparse) def __repr__(self): return "<SklearnClassifier(%r)>" % self._clf def classify_many(self, featuresets): X = self._vectorizer.transform(featuresets) classes = self._encoder.classes_ return [classes[i] for i in self._clf.predict(X)] def prob_classify_many(self, featuresets): X = self._vectorizer.transform(featuresets) y_proba_list = self._clf.predict_proba(X) return [self._make_probdist(y_proba) for y_proba in y_proba_list] def labels(self): return list(self._encoder.classes_) def train(self, labeled_featuresets): X, y = list(zip(*labeled_featuresets)) X = self._vectorizer.fit_transform(X) y = self._encoder.fit_transform(y) self._clf.fit(X, y) return self def _make_probdist(self, y_proba): classes = self._encoder.classes_ return DictionaryProbDist({classes[i]: p for i, p in enumerate(y_proba)}) if __name__ == "__main__": from sklearn.linear_model import LogisticRegression from sklearn.naive_bayes import BernoulliNB from nltk.classify.util import names_demo, names_demo_features print("scikit-learn Naive Bayes:") names_demo( SklearnClassifier(BernoulliNB(binarize=False)).train, features=names_demo_features, ) print("\n\nscikit-learn logistic regression:") names_demo( SklearnClassifier(LogisticRegression(C=1000)).train, features=names_demo_features, )
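Beyond the demo at the bottom of the module, here is a compact usage sketch (not part of the source) that wraps a linear SVM; it assumes scikit-learn is installed, and the three-example training set is invented.

# Wrap a scikit-learn estimator so it accepts NLTK-style (featureset, label) pairs.
from sklearn.svm import LinearSVC
from nltk.classify.scikitlearn import SklearnClassifier

train = [
    ({"a": 1, "b": 0}, "x"),
    ({"a": 0, "b": 1}, "y"),
    ({"a": 1, "b": 1}, "x"),
]
classif = SklearnClassifier(LinearSVC()).train(train)
print(classif.classify_many([{"a": 1, "b": 0}, {"a": 0, "b": 1}]))   # e.g. ['x', 'y']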
natural language toolkit senna interface c 20012023 nltk project rami alrfou ralrfoucs stonybrook edu url https www nltk org for license information see license txt a general interface to the senna pipeline that supports any of the operations specified in supportedoperations applying multiple operations at once has the speed advantage for example senna will automatically determine pos tags if you are extracting named entities applying both of the operations will cost only the time of extracting the named entities the senna pipeline has a fixed maximum size of the sentences that it can read by default it is 1024 tokensentence if you have larger sentences changing the maxsentencesize value in sennamain c should be considered and your system specific binary should be rebuilt otherwise this could introduce misalignment errors the input is path to the directory that contains senna executables if the path is incorrect senna will automatically search for executable file specified in senna environment variable list of the operations needed to be performed optionally the encoding of the input data default utf8 note unit tests for this module can be found in testunittestsenna py from nltk classify import senna pipeline senna usrsharesennav3 0 pos chk ner doctest skip sent dusseldorf is an international business center split token word token chk token ner token pos for token in pipeline tagsent doctest skip dusseldorf bnp bloc nnp is bvp o vbz an bnp o dt international inp o jj business inp o nn center inp o nn verifies the existence of the executable on the self path first sennabinaryfile1 self executableself path check for the system environment self path path joinenviron senna the function that determines the system specific binary that should be used in the pipeline in case the system is not known the default senna binary will be used a method that calculates the order of the columns that senna pipeline will output the tags into this depends on the operations being ordered applies the specified operations on a list of tokens applies the tag method over a list of sentences this method will return a list of dictionaries every dictionary will contain a word with its calculated annotationstags build the senna command to run the tagger serialize the actual sentences to a temporary string run the tagger and get the output check the return code output the tagged sentences natural language toolkit senna interface c 2001 2023 nltk project rami al rfou ralrfou cs stonybrook edu url https www nltk org for license information see license txt a general interface to the senna pipeline that supports any of the operations specified in supported_operations applying multiple operations at once has the speed advantage for example senna will automatically determine pos tags if you are extracting named entities applying both of the operations will cost only the time of extracting the named entities the senna pipeline has a fixed maximum size of the sentences that it can read by default it is 1024 token sentence if you have larger sentences changing the max_sentence_size value in senna_main c should be considered and your system specific binary should be rebuilt otherwise this could introduce misalignment errors the input is path to the directory that contains senna executables if the path is incorrect senna will automatically search for executable file specified in senna environment variable list of the operations needed to be performed optionally the encoding of the input data default utf 8 note unit tests for this 
module can be found in test unit test_senna py from nltk classify import senna pipeline senna usr share senna v3 0 pos chk ner doctest skip sent dusseldorf is an international business center split token word token chk token ner token pos for token in pipeline tag sent doctest skip dusseldorf b np b loc nnp is b vp o vbz an b np o dt international i np o jj business i np o nn center i np o nn verifies the existence of the executable on the self _path first senna_binary_file_1 self executable self _path check for the system environment self _path path join environ senna the function that determines the system specific binary that should be used in the pipeline in case the system is not known the default senna binary will be used a method that calculates the order of the columns that senna pipeline will output the tags into this depends on the operations being ordered applies the specified operation s on a list of tokens applies the tag method over a list of sentences this method will return a list of dictionaries every dictionary will contain a word with its calculated annotations tags build the senna command to run the tagger serialize the actual sentences to a temporary string run the tagger and get the output check the return code output the tagged sentences
from os import environ, path, sep from platform import architecture, system from subprocess import PIPE, Popen from nltk.tag.api import TaggerI class Senna(TaggerI): SUPPORTED_OPERATIONS = ["pos", "chk", "ner"] def __init__(self, senna_path, operations, encoding="utf-8"): self._encoding = encoding self._path = path.normpath(senna_path) + sep exe_file_1 = self.executable(self._path) if not path.isfile(exe_file_1): if "SENNA" in environ: self._path = path.normpath(environ["SENNA"]) + sep exe_file_2 = self.executable(self._path) if not path.isfile(exe_file_2): raise LookupError( "Senna executable expected at %s or %s but not found" % (exe_file_1, exe_file_2) ) self.operations = operations def executable(self, base_path): os_name = system() if os_name == "Linux": bits = architecture()[0] if bits == "64bit": return path.join(base_path, "senna-linux64") return path.join(base_path, "senna-linux32") if os_name == "Windows": return path.join(base_path, "senna-win32.exe") if os_name == "Darwin": return path.join(base_path, "senna-osx") return path.join(base_path, "senna") def _map(self): _map = {} i = 1 for operation in Senna.SUPPORTED_OPERATIONS: if operation in self.operations: _map[operation] = i i += 1 return _map def tag(self, tokens): return self.tag_sents([tokens])[0] def tag_sents(self, sentences): encoding = self._encoding if not path.isfile(self.executable(self._path)): raise LookupError( "Senna executable expected at %s but not found" % self.executable(self._path) ) _senna_cmd = [ self.executable(self._path), "-path", self._path, "-usrtokens", "-iobtags", ] _senna_cmd.extend(["-" + op for op in self.operations]) _input = "\n".join(" ".join(x) for x in sentences) + "\n" if isinstance(_input, str) and encoding: _input = _input.encode(encoding) p = Popen(_senna_cmd, stdin=PIPE, stdout=PIPE, stderr=PIPE) (stdout, stderr) = p.communicate(input=_input) senna_output = stdout if p.returncode != 0: raise RuntimeError("Senna command failed! Details: %s" % stderr) if encoding: senna_output = stdout.decode(encoding) map_ = self._map() tagged_sentences = [[]] sentence_index = 0 token_index = 0 for tagged_word in senna_output.strip().split("\n"): if not tagged_word: tagged_sentences.append([]) sentence_index += 1 token_index = 0 continue tags = tagged_word.split("\t") result = {} for tag in map_: result[tag] = tags[map_[tag]].strip() try: result["word"] = sentences[sentence_index][token_index] except IndexError as e: raise IndexError( "Misalignment error occurred at sentence number %d. Possible reason" " is that the sentence size exceeded the maximum size. Check the " "documentation of Senna class for more information." % sentence_index ) from e tagged_sentences[-1].append(result) token_index += 1 return tagged_sentences
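A usage sketch for the interface above (not part of the source). It assumes the SENNA toolkit is installed locally; the path below is the one used in the module's own docstring and should be adjusted for your installation, or the SENNA environment variable can be set instead.

# Requires a local SENNA installation; adjust the path for your system.
from nltk.classify import Senna

pipeline = Senna("/usr/share/senna-v3.0", ["pos", "chk", "ner"])
sent = "Dusseldorf is an international business center".split()
for token in pipeline.tag(sent):
    print(token["word"], token["pos"], token["chk"], token["ner"])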
Natural Language Toolkit: SVM-based classifier. (C) 2001-2023 NLTK Project. Author: Leon Derczynski <leon@dcs.shef.ac.uk>. URL: https://www.nltk.org/. For license information, see LICENSE.TXT. nltk.classify.svm was deprecated; for classification based on support vector machines (SVMs), use nltk.classify.scikitlearn or scikit-learn (https://scikit-learn.org) directly.
class SvmClassifier: def __init__(self, *args, **kwargs): raise NotImplementedError(__doc__)
natural language toolkit interface to tadm classifier c 2001 2023 nltk project joseph frazee jfrazee mail utexas edu url https www nltk org for license information see license txt generate an input file for tadm based on the given corpus of classified tokens type train_toks list tuple dict str param train_toks training data represented as a list of pairs the first member of which is a feature dictionary and the second of which is a classification label type encoding tadmeventmaxentfeatureencoding param encoding a feature encoding used to convert featuresets into feature vectors type stream stream param stream the stream to which the tadm input file should be written see the following for a file format description https sf net forum forum php thread_id 1391502 forum_id 473054 https sf net forum forum php thread_id 1675097 forum_id 473054 given the stdout output generated by tadm when training a model return a numpy array containing the corresponding weight vector call the tadm binary with the given arguments call tadm via a subprocess check the return code
import subprocess import sys from nltk.internals import find_binary try: import numpy except ImportError: pass _tadm_bin = None def config_tadm(bin=None): global _tadm_bin _tadm_bin = find_binary( "tadm", bin, env_vars=["TADM"], binary_names=["tadm"], url="http://tadm.sf.net" ) def write_tadm_file(train_toks, encoding, stream): labels = encoding.labels() for featureset, label in train_toks: length_line = "%d\n" % len(labels) stream.write(length_line) for known_label in labels: v = encoding.encode(featureset, known_label) line = "%d %d %s\n" % ( int(label == known_label), len(v), " ".join("%d %d" % u for u in v), ) stream.write(line) def parse_tadm_weights(paramfile): weights = [] for line in paramfile: weights.append(float(line.strip())) return numpy.array(weights, "d") def call_tadm(args): if isinstance(args, str): raise TypeError("args should be a list of strings") if _tadm_bin is None: config_tadm() cmd = [_tadm_bin] + args p = subprocess.Popen(cmd, stdout=sys.stdout) (stdout, stderr) = p.communicate() if p.returncode != 0: print() print(stderr) raise OSError("tadm command failed!") def names_demo(): from nltk.classify.maxent import TadmMaxentClassifier from nltk.classify.util import names_demo classifier = names_demo(TadmMaxentClassifier.train) def encoding_demo(): import sys from nltk.classify.maxent import TadmEventMaxentFeatureEncoding tokens = [ ({"f0": 1, "f1": 1, "f3": 1}, "A"), ({"f0": 1, "f2": 1, "f4": 1}, "B"), ({"f0": 2, "f2": 1, "f3": 1, "f4": 1}, "A"), ] encoding = TadmEventMaxentFeatureEncoding.train(tokens) write_tadm_file(tokens, encoding, sys.stdout) print() for i in range(encoding.length()): print("%s --> %d" % (encoding.describe(i), i)) print() if __name__ == "__main__": encoding_demo() names_demo()
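To make the event-file format described above visible without running the tadm binary, here is a small sketch (not part of the source) that writes the encoded events for two toy training tokens into an in-memory buffer; the feature names and labels are invented.

# Illustration only: inspect what write_tadm_file produces for toy data.
import io
from nltk.classify.maxent import TadmEventMaxentFeatureEncoding
from nltk.classify.tadm import write_tadm_file

toks = [({"f1": 1, "f2": 1}, "A"), ({"f1": 1, "f3": 1}, "B")]
encoding = TadmEventMaxentFeatureEncoding.train(toks)
buf = io.StringIO()
write_tadm_file(toks, encoding, buf)
print(buf.getvalue())
# One block per token: the number of labels, then one line per label of the form
# "<is_correct> <n_pairs> <feature_id> <value> ...".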
Natural Language Toolkit: Classifier Utility Functions
(C) 2001-2023 NLTK Project
Authors: Edward Loper <edloper@gmail.com>, Steven Bird <stevenbird1@gmail.com> (minor additions)
URL: https://www.nltk.org/
For license information, see LICENSE.TXT

Utility functions and classes for classifiers. A commented-out "from nltk.util import deprecated" import remains in the source; nltk.classify.util is imported for accuracy and log_likelihood.

Helper functions. (Alternative name possibilities considered for apply_features: map_featurefunc(), detect_features(), map_featuredetect() -- or just have users use LazyMap directly.)

apply_features(feature_func, toks, labeled=None): Use the LazyMap class to construct a lazy list-like object that is analogous to map(feature_func, toks). In particular, if labeled=False, then the returned list-like object's values are equal to [feature_func(tok) for tok in toks]; if labeled=True, then the returned list-like object's values are equal to [(feature_func(tok), label) for (tok, label) in toks]. The primary purpose of this function is to avoid the memory overhead involved in storing all the featuresets for every token in a corpus; instead, these featuresets are constructed lazily, as-needed. The reduction in memory overhead can be especially significant when the underlying list of tokens is itself lazy (as is the case with many corpus readers).
:param feature_func: The function that will be applied to each token; it should return a featureset, i.e. a dict mapping feature names to feature values.
:param toks: The list of tokens to which feature_func should be applied. If labeled=False, the list elements are passed directly to feature_func(); if labeled=True, the list elements should be tuples (tok, label), and tok is passed to feature_func().
:param labeled: If true, then toks contains labeled tokens, i.e. tuples of the form (tok, label). (Default: auto-detect based on types.)

attested_labels(tokens): Return a list of all labels that are attested in the given list of tokens.
:rtype: list of (immutable)
:param tokens: The list of classified tokens from which to extract labels; a classified token has the form (token, label).
:type tokens: list

CutoffChecker: A helper class that implements cutoff checks based on number of iterations and log likelihood. Accuracy cutoffs are also implemented, but they're almost never a good idea to use. check() applies the iteration cutoff, the log likelihood cutoff and the log likelihood delta cutoff (plus the corresponding accuracy checks) and reports whether any cutoff has been reached.

Demos:

names_demo(trainer, features): Construct a list of classified names using the names corpus, randomly split the names into a test & train set, train up a classifier, and run the classifier on the test data. For classifiers that can find probabilities, show the log likelihood and some sample probability distributions. Return the classifier.

partial_names_demo(trainer, features): Create a list of male names to be used as positive-labeled examples for training, and a list of male and female names to be used as unlabeled examples. Create a test set with correctly-labeled male and female names. Train up a classifier, run it on the test data, show the log likelihood and some sample probability distributions where available, and return the classifier.

wsd_demo(trainer, word, features, n): Get the instances, randomly split them into a test & train set, train up a classifier, run it on the test data, show the log likelihood where available, and return the classifier.

check_megam_config(): Checks whether the megam binary is configured.
import math import nltk.classify.util from nltk.util import LazyMap def apply_features(feature_func, toks, labeled=None): if labeled is None: labeled = toks and isinstance(toks[0], (tuple, list)) if labeled: def lazy_func(labeled_token): return (feature_func(labeled_token[0]), labeled_token[1]) return LazyMap(lazy_func, toks) else: return LazyMap(feature_func, toks) def attested_labels(tokens): return tuple({label for (tok, label) in tokens}) def log_likelihood(classifier, gold): results = classifier.prob_classify_many([fs for (fs, l) in gold]) ll = [pdist.prob(l) for ((fs, l), pdist) in zip(gold, results)] return math.log(sum(ll) / len(ll)) def accuracy(classifier, gold): results = classifier.classify_many([fs for (fs, l) in gold]) correct = [l == r for ((fs, l), r) in zip(gold, results)] if correct: return sum(correct) / len(correct) else: return 0 class CutoffChecker: def __init__(self, cutoffs): self.cutoffs = cutoffs.copy() if "min_ll" in cutoffs: cutoffs["min_ll"] = -abs(cutoffs["min_ll"]) if "min_lldelta" in cutoffs: cutoffs["min_lldelta"] = abs(cutoffs["min_lldelta"]) self.ll = None self.acc = None self.iter = 1 def check(self, classifier, train_toks): cutoffs = self.cutoffs self.iter += 1 if "max_iter" in cutoffs and self.iter >= cutoffs["max_iter"]: return True new_ll = nltk.classify.util.log_likelihood(classifier, train_toks) if math.isnan(new_ll): return True if "min_ll" in cutoffs or "min_lldelta" in cutoffs: if "min_ll" in cutoffs and new_ll >= cutoffs["min_ll"]: return True if ( "min_lldelta" in cutoffs and self.ll and ((new_ll - self.ll) <= abs(cutoffs["min_lldelta"])) ): return True self.ll = new_ll if "max_acc" in cutoffs or "min_accdelta" in cutoffs: new_acc = nltk.classify.util.log_likelihood(classifier, train_toks) if "max_acc" in cutoffs and new_acc >= cutoffs["max_acc"]: return True if ( "min_accdelta" in cutoffs and self.acc and ((new_acc - self.acc) <= abs(cutoffs["min_accdelta"])) ): return True self.acc = new_acc return False def names_demo_features(name): features = {} features["alwayson"] = True features["startswith"] = name[0].lower() features["endswith"] = name[-1].lower() for letter in "abcdefghijklmnopqrstuvwxyz": features["count(%s)" % letter] = name.lower().count(letter) features["has(%s)" % letter] = letter in name.lower() return features def binary_names_demo_features(name): features = {} features["alwayson"] = True features["startswith(vowel)"] = name[0].lower() in "aeiouy" features["endswith(vowel)"] = name[-1].lower() in "aeiouy" for letter in "abcdefghijklmnopqrstuvwxyz": features["count(%s)" % letter] = name.lower().count(letter) features["has(%s)" % letter] = letter in name.lower() features["startswith(%s)" % letter] = letter == name[0].lower() features["endswith(%s)" % letter] = letter == name[-1].lower() return features def names_demo(trainer, features=names_demo_features): import random from nltk.corpus import names namelist = [(name, "male") for name in names.words("male.txt")] + [ (name, "female") for name in names.words("female.txt") ] random.seed(123456) random.shuffle(namelist) train = namelist[:5000] test = namelist[5000:5500] print("Training classifier...") classifier = trainer([(features(n), g) for (n, g) in train]) print("Testing classifier...") acc = accuracy(classifier, [(features(n), g) for (n, g) in test]) print("Accuracy: %6.4f" % acc) try: test_featuresets = [features(n) for (n, g) in test] pdists = classifier.prob_classify_many(test_featuresets) ll = [pdist.logprob(gold) for ((name, gold), pdist) in zip(test, pdists)] 
print("Avg. log likelihood: %6.4f" % (sum(ll) / len(test))) print() print("Unseen Names P(Male) P(Female)\n" + "-" * 40) for ((name, gender), pdist) in list(zip(test, pdists))[:5]: if gender == "male": fmt = " %-15s *%6.4f %6.4f" else: fmt = " %-15s %6.4f *%6.4f" print(fmt % (name, pdist.prob("male"), pdist.prob("female"))) except NotImplementedError: pass return classifier def partial_names_demo(trainer, features=names_demo_features): import random from nltk.corpus import names male_names = names.words("male.txt") female_names = names.words("female.txt") random.seed(654321) random.shuffle(male_names) random.shuffle(female_names) positive = map(features, male_names[:2000]) unlabeled = map(features, male_names[2000:2500] + female_names[:500]) test = [(name, True) for name in male_names[2500:2750]] + [ (name, False) for name in female_names[500:750] ] random.shuffle(test) print("Training classifier...") classifier = trainer(positive, unlabeled) print("Testing classifier...") acc = accuracy(classifier, [(features(n), m) for (n, m) in test]) print("Accuracy: %6.4f" % acc) try: test_featuresets = [features(n) for (n, m) in test] pdists = classifier.prob_classify_many(test_featuresets) ll = [pdist.logprob(gold) for ((name, gold), pdist) in zip(test, pdists)] print("Avg. log likelihood: %6.4f" % (sum(ll) / len(test))) print() print("Unseen Names P(Male) P(Female)\n" + "-" * 40) for ((name, is_male), pdist) in zip(test, pdists)[:5]: if is_male == True: fmt = " %-15s *%6.4f %6.4f" else: fmt = " %-15s %6.4f *%6.4f" print(fmt % (name, pdist.prob(True), pdist.prob(False))) except NotImplementedError: pass return classifier _inst_cache = {} def wsd_demo(trainer, word, features, n=1000): import random from nltk.corpus import senseval print("Reading data...") global _inst_cache if word not in _inst_cache: _inst_cache[word] = [(i, i.senses[0]) for i in senseval.instances(word)] instances = _inst_cache[word][:] if n > len(instances): n = len(instances) senses = list({l for (i, l) in instances}) print(" Senses: " + " ".join(senses)) print("Splitting into test & train...") random.seed(123456) random.shuffle(instances) train = instances[: int(0.8 * n)] test = instances[int(0.8 * n) : n] print("Training classifier...") classifier = trainer([(features(i), l) for (i, l) in train]) print("Testing classifier...") acc = accuracy(classifier, [(features(i), l) for (i, l) in test]) print("Accuracy: %6.4f" % acc) try: test_featuresets = [features(i) for (i, n) in test] pdists = classifier.prob_classify_many(test_featuresets) ll = [pdist.logprob(gold) for ((name, gold), pdist) in zip(test, pdists)] print("Avg. log likelihood: %6.4f" % (sum(ll) / len(test))) except NotImplementedError: pass return classifier def check_megam_config(): try: _megam_bin except NameError as e: err_msg = str( "Please configure your megam binary first, e.g.\n" ">>> nltk.config_megam('/usr/bin/local/megam')" ) raise NameError(err_msg) from e
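A short sketch of the lazy feature mapping described above, assuming this module is importable as nltk.classify.util and that nltk's NaiveBayesClassifier is available; the tiny name list is illustrative only (it reuses the training pairs as the gold set just to exercise accuracy()).

from nltk.classify import NaiveBayesClassifier
from nltk.classify.util import accuracy, apply_features


def gender_features(name):
    # a deliberately tiny featureset so the lazy mapping is easy to inspect
    return {"last_letter": name[-1].lower()}


labeled_names = [("John", "male"), ("Mary", "female"), ("Alice", "female"), ("Bob", "male")]

# featuresets are computed lazily, one pair at a time, instead of being
# materialised up front
train = apply_features(gender_features, labeled_names, labeled=True)
classifier = NaiveBayesClassifier.train(train)

gold = apply_features(gender_features, labeled_names, labeled=True)
print("accuracy on the (toy) training pairs:", accuracy(classifier, gold))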
Natural Language Toolkit: NLTK command-line interface
(C) 2001-2023 NLTK Project
URL: https://www.nltk.org/
For license information, see LICENSE.TXT

tokenize: this command tokenizes a text stream using nltk.word_tokenize. With a single process, joblib parallelization is slower, so the input is simply processed line by line; with more processes the lines are fanned out through parallelize_preprocess.
import click
from tqdm import tqdm

from nltk import word_tokenize
from nltk.util import parallelize_preprocess

CONTEXT_SETTINGS = dict(help_option_names=["-h", "--help"])


@click.group(context_settings=CONTEXT_SETTINGS)
@click.version_option()
def cli():
    pass


@cli.command("tokenize")
@click.option(
    "--language",
    "-l",
    default="en",
    help="The language for the Punkt sentence tokenization.",
)
@click.option(
    "--preserve-line",
    "-l",
    default=True,
    is_flag=True,
    help="An option to keep the sentence as-is and not sentence tokenize it.",
)
@click.option("--processes", "-j", default=1, help="No. of processes.")
@click.option("--encoding", "-e", default="utf8", help="Specify encoding of file.")
@click.option(
    "--delimiter", "-d", default=" ", help="Specify delimiter to join the tokens."
)
def tokenize_file(language, preserve_line, processes, encoding, delimiter):
    with click.get_text_stream("stdin", encoding=encoding) as fin:
        with click.get_text_stream("stdout", encoding=encoding) as fout:
            # If it's single process, joblib parallelization is slower,
            # so just process line by line normally.
            if processes == 1:
                for line in tqdm(fin.readlines()):
                    print(delimiter.join(word_tokenize(line)), end="\n", file=fout)
            else:
                for outline in parallelize_preprocess(
                    word_tokenize, fin.readlines(), processes, progress_bar=True
                ):
                    print(delimiter.join(outline), end="\n", file=fout)
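A hedged sketch of driving the tokenize command without a shell, via click's test runner. It assumes click is installed, that this command group is importable as nltk.cli (the import path is an assumption), and that the punkt tokenizer data used by word_tokenize has been downloaded.

from click.testing import CliRunner

from nltk.cli import cli  # assumed location of the click group defined above

runner = CliRunner()
result = runner.invoke(cli, ["tokenize", "--delimiter", " "], input="Hello, world.\n")
print(result.exit_code)   # 0 on success
print(result.output)      # expected to contain: Hello , world .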
Natural Language Toolkit: Clusterers
(C) 2001-2023 NLTK Project
Author: Trevor Cohn <tacohn@cs.mu.oz.au>
URL: https://www.nltk.org/
For license information, see LICENSE.TXT

This module contains a number of basic clustering algorithms. Clustering describes the task of discovering groups of similar items with a large collection. It is also described as unsupervised machine learning, as the data from which it learns is unannotated with class information, as is the case for supervised learning. Annotated data is difficult and expensive to obtain in the quantities required for the majority of supervised learning algorithms. This problem, the knowledge acquisition bottleneck, is common to most natural language processing tasks, thus fueling the need for quality unsupervised approaches.

This module contains a k-means clusterer, E-M clusterer and a group average agglomerative clusterer (GAAC). All these clusterers involve finding good cluster groupings for a set of vectors in multi-dimensional space.

The k-means clusterer starts with k arbitrary chosen means, then allocates each vector to the cluster with the closest mean. It then recalculates the means of each cluster as the centroid of the vectors in the cluster. This process repeats until the cluster memberships stabilise. This is a hill-climbing algorithm which may converge to a local maximum, hence the clustering is often repeated with random initial means and the most commonly occurring output means are chosen.

The GAAC clusterer starts with each of the N vectors as singleton clusters. It then iteratively merges pairs of clusters which have the closest centroids. This continues until there is only one cluster. The order of merges gives rise to a dendrogram: a tree with the earlier merges lower than later merges. The membership of a given number of clusters c, 1 <= c <= N, can be found by cutting the dendrogram at depth c.

The Gaussian EM clusterer models the vectors as being produced by a mixture of k Gaussian sources. The parameters of these sources (prior probability, mean and covariance matrix) are then found to maximise the likelihood of the given data. This is done with the expectation maximisation algorithm. It starts with k arbitrarily chosen means, priors and covariance matrices. It then calculates the membership probabilities for each vector in each of the clusters; this is the 'E' step. The cluster parameters are then updated in the 'M' step using the maximum likelihood estimate from the cluster membership probabilities. This process continues until the likelihood of the data does not significantly increase.

They all extend the ClusterI interface, which defines common operations available with each clusterer. These operations include:
- cluster: clusters a sequence of vectors
- classify: assign a vector to a cluster
- classification_probdist: give the probability distribution over cluster memberships

The current existing classifiers also extend cluster.VectorSpaceClusterer, an abstract class which allows for singular value decomposition (SVD) and vector normalisation. SVD is used to reduce the dimensionality of the vector space in such a manner as to preserve as much of the variation as possible, by reparameterising the axes in order of variability and discarding all bar the first d dimensions. Normalisation ensures that vectors fall in the unit hypersphere.

Usage example (see also demo()):

    from nltk import cluster
    from nltk.cluster import euclidean_distance
    from numpy import array

    vectors = [array(f) for f in [[3, 3], [1, 2], [4, 2], [4, 0]]]

    # initialise the clusterer (will also assign the vectors to clusters)
    clusterer = cluster.KMeansClusterer(2, euclidean_distance)
    clusterer.cluster(vectors, True)

    # classify a new vector
    print(clusterer.classify(array([3, 3])))

Note that the vectors must use numpy array-like objects. nltk_contrib.unimelb.tacohn.SparseArrays may be used for efficiency when required.
from nltk.cluster.em import EMClusterer
from nltk.cluster.gaac import GAAClusterer
from nltk.cluster.kmeans import KMeansClusterer
from nltk.cluster.util import (
    Dendrogram,
    VectorSpaceClusterer,
    cosine_distance,
    euclidean_distance,
)
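A quick sketch of the two distance helpers re-exported above, assuming numpy is installed; the values noted in the comments follow directly from the formulas (Euclidean norm of the difference, and one minus the cosine of the angle between the vectors).

import numpy

from nltk.cluster import cosine_distance, euclidean_distance

u = numpy.array([1.0, 0.0])
v = numpy.array([0.0, 1.0])

print(euclidean_distance(u, v))  # sqrt(2), roughly 1.4142
print(cosine_distance(u, v))     # 1 - cos(90 degrees) = 1.0 for orthogonal vectors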
Natural Language Toolkit: Expectation Maximization Clusterer
(C) 2001-2023 NLTK Project
Author: Trevor Cohn <tacohn@cs.mu.oz.au>
URL: https://www.nltk.org/
For license information, see LICENSE.TXT

The Gaussian EM clusterer models the vectors as being produced by a mixture of k Gaussian sources. The parameters of these sources (prior probability, mean and covariance matrix) are then found to maximise the likelihood of the given data. This is done with the expectation maximisation algorithm. It starts with k arbitrarily chosen means, priors and covariance matrices. It then calculates the membership probabilities for each vector in each of the clusters; this is the 'E' step. The cluster parameters are then updated in the 'M' step using the maximum likelihood estimate from the cluster membership probabilities. This process continues until the likelihood of the data does not significantly increase.

EMClusterer: Creates an EM clusterer with the given starting parameters, convergence threshold and vector mangling parameters.
:param initial_means: the means of the gaussian cluster centers
:type initial_means: [seq of] numpy array or seq of SparseArray
:param priors: the prior probability for each cluster
:type priors: numpy array or seq of float
:param covariance_matrices: the covariance matrix for each cluster
:type covariance_matrices: [seq of] numpy array
:param conv_threshold: maximum change in likelihood before deemed convergent
:type conv_threshold: int or float
:param bias: variance bias used to ensure non-singular covariance matrices
:type bias: float
:param normalise: should vectors be normalised to length 1
:type normalise: boolean
:param svd_dimensions: number of dimensions to use in reducing vector dimensionality with SVD
:type svd_dimensions: int

cluster_vectorspace(): Set the parameters to initial values, then do the E and M steps until the likelihood plateaus. The E step calculates the hidden variables h[i, j]; the M step updates the parameters (cvm, p, mean), adding a bias term to stop the covariance matrix being singular. The likelihood is then recalculated (FIXME: may be broken) and checked for convergence.

_gaussian(): returns 0 on overflow; this happens when the exponent is negative infinity, i.e. b = 0, i.e. the inverse of cvm is huge (cvm is almost zero).

demo(): Non-interactive demonstration of the clusterers with simple 2-D data (the example from Figure 14.10, page 519, Manning and Schutze); classifies a new vector and shows the classification probabilities.
try: import numpy except ImportError: pass from nltk.cluster.util import VectorSpaceClusterer class EMClusterer(VectorSpaceClusterer): def __init__( self, initial_means, priors=None, covariance_matrices=None, conv_threshold=1e-6, bias=0.1, normalise=False, svd_dimensions=None, ): VectorSpaceClusterer.__init__(self, normalise, svd_dimensions) self._means = numpy.array(initial_means, numpy.float64) self._num_clusters = len(initial_means) self._conv_threshold = conv_threshold self._covariance_matrices = covariance_matrices self._priors = priors self._bias = bias def num_clusters(self): return self._num_clusters def cluster_vectorspace(self, vectors, trace=False): assert len(vectors) > 0 dimensions = len(vectors[0]) means = self._means priors = self._priors if not priors: priors = self._priors = ( numpy.ones(self._num_clusters, numpy.float64) / self._num_clusters ) covariances = self._covariance_matrices if not covariances: covariances = self._covariance_matrices = [ numpy.identity(dimensions, numpy.float64) for i in range(self._num_clusters) ] lastl = self._loglikelihood(vectors, priors, means, covariances) converged = False while not converged: if trace: print("iteration; loglikelihood", lastl) h = numpy.zeros((len(vectors), self._num_clusters), numpy.float64) for i in range(len(vectors)): for j in range(self._num_clusters): h[i, j] = priors[j] * self._gaussian( means[j], covariances[j], vectors[i] ) h[i, :] /= sum(h[i, :]) for j in range(self._num_clusters): covariance_before = covariances[j] new_covariance = numpy.zeros((dimensions, dimensions), numpy.float64) new_mean = numpy.zeros(dimensions, numpy.float64) sum_hj = 0.0 for i in range(len(vectors)): delta = vectors[i] - means[j] new_covariance += h[i, j] * numpy.multiply.outer(delta, delta) sum_hj += h[i, j] new_mean += h[i, j] * vectors[i] covariances[j] = new_covariance / sum_hj means[j] = new_mean / sum_hj priors[j] = sum_hj / len(vectors) covariances[j] += self._bias * numpy.identity(dimensions, numpy.float64) l = self._loglikelihood(vectors, priors, means, covariances) if abs(lastl - l) < self._conv_threshold: converged = True lastl = l def classify_vectorspace(self, vector): best = None for j in range(self._num_clusters): p = self._priors[j] * self._gaussian( self._means[j], self._covariance_matrices[j], vector ) if not best or p > best[0]: best = (p, j) return best[1] def likelihood_vectorspace(self, vector, cluster): cid = self.cluster_names().index(cluster) return self._priors[cluster] * self._gaussian( self._means[cluster], self._covariance_matrices[cluster], vector ) def _gaussian(self, mean, cvm, x): m = len(mean) assert cvm.shape == (m, m), "bad sized covariance matrix, %s" % str(cvm.shape) try: det = numpy.linalg.det(cvm) inv = numpy.linalg.inv(cvm) a = det**-0.5 * (2 * numpy.pi) ** (-m / 2.0) dx = x - mean print(dx, inv) b = -0.5 * numpy.dot(numpy.dot(dx, inv), dx) return a * numpy.exp(b) except OverflowError: return 0 def _loglikelihood(self, vectors, priors, means, covariances): llh = 0.0 for vector in vectors: p = 0 for j in range(len(priors)): p += priors[j] * self._gaussian(means[j], covariances[j], vector) llh += numpy.log(p) return llh def __repr__(self): return "<EMClusterer means=%s>" % list(self._means) def demo(): from nltk import cluster vectors = [numpy.array(f) for f in [[0.5, 0.5], [1.5, 0.5], [1, 3]]] means = [[4, 2], [4, 2.01]] clusterer = cluster.EMClusterer(means, bias=0.1) clusters = clusterer.cluster(vectors, True, trace=True) print("Clustered:", vectors) print("As: ", clusters) print() for c in 
range(2): print("Cluster:", c) print("Prior: ", clusterer._priors[c]) print("Mean: ", clusterer._means[c]) print("Covar: ", clusterer._covariance_matrices[c]) print() vector = numpy.array([2, 2]) print("classify(%s):" % vector, end=" ") print(clusterer.classify(vector)) vector = numpy.array([2, 2]) print("classification_probdist(%s):" % vector) pdist = clusterer.classification_probdist(vector) for sample in pdist.samples(): print(f"{sample} => {pdist.prob(sample) * 100:.0f}%") if __name__ == "__main__": demo()
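A hedged sanity-check sketch relating the per-cluster density used in the E step to the standard multivariate normal. scipy is an extra assumption here (it is not required by the clusterer itself), and the check calls the private _gaussian helper, which as written above also prints its intermediate dx/inv values.

import numpy
from scipy.stats import multivariate_normal  # assumption: scipy is available

from nltk.cluster import EMClusterer

mean = numpy.array([0.0, 0.0])
cov = numpy.identity(2)
x = numpy.array([1.0, 0.5])

clusterer = EMClusterer([[0, 0], [1, 1]])      # initial means only needed to construct
print(clusterer._gaussian(mean, cov, x))       # density as computed by the clusterer
print(multivariate_normal(mean, cov).pdf(x))   # should agree closely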
Natural Language Toolkit: Group Average Agglomerative Clusterer
(C) 2001-2023 NLTK Project
Author: Trevor Cohn <tacohn@cs.mu.oz.au>
URL: https://www.nltk.org/
For license information, see LICENSE.TXT

The group average agglomerative clusterer starts with each of the N vectors as singleton clusters. It then iteratively merges pairs of clusters which have the closest centroids. This continues until there is only one cluster. The order of merges gives rise to a dendrogram: a tree with the earlier merges lower than later merges. The membership of a given number of clusters c, 1 <= c <= N, can be found by cutting the dendrogram at depth c. This clusterer uses the cosine similarity metric only, which allows for efficient speed-up in the clustering process.

The clustering loop stores the merge order, sets up variables describing the initial situation, constructs the similarity matrix, then repeatedly merges the pair of clusters i and j with the closest centroids: the similarities are updated for the merge, j is removed, the clusters are merged, and the index map is updated to reflect the indexes as if j had been removed. The new cluster i, merged from i and j, adopts the average of i's and j's similarity to each other cluster, weighted by the number of points in clusters i and j (updating the entries for x < i, then i < x < j, then i < j < x).

dendrogram(): Return the dendrogram representing the current clustering.
:rtype: Dendrogram

demo(): Non-interactive demonstration of the clusterers with simple 2-D data; uses a set of tokens with 2D indices, tests the GAAC clusterer with 4 clusters, shows the dendrogram, and classifies a new vector.
try: import numpy except ImportError: pass from nltk.cluster.util import Dendrogram, VectorSpaceClusterer, cosine_distance class GAAClusterer(VectorSpaceClusterer): def __init__(self, num_clusters=1, normalise=True, svd_dimensions=None): VectorSpaceClusterer.__init__(self, normalise, svd_dimensions) self._num_clusters = num_clusters self._dendrogram = None self._groups_values = None def cluster(self, vectors, assign_clusters=False, trace=False): self._dendrogram = Dendrogram( [numpy.array(vector, numpy.float64) for vector in vectors] ) return VectorSpaceClusterer.cluster(self, vectors, assign_clusters, trace) def cluster_vectorspace(self, vectors, trace=False): N = len(vectors) cluster_len = [1] * N cluster_count = N index_map = numpy.arange(N) dims = (N, N) dist = numpy.ones(dims, dtype=float) * numpy.inf for i in range(N): for j in range(i + 1, N): dist[i, j] = cosine_distance(vectors[i], vectors[j]) while cluster_count > max(self._num_clusters, 1): i, j = numpy.unravel_index(dist.argmin(), dims) if trace: print("merging %d and %d" % (i, j)) self._merge_similarities(dist, cluster_len, i, j) dist[:, j] = numpy.inf dist[j, :] = numpy.inf cluster_len[i] = cluster_len[i] + cluster_len[j] self._dendrogram.merge(index_map[i], index_map[j]) cluster_count -= 1 index_map[j + 1 :] -= 1 index_map[j] = N self.update_clusters(self._num_clusters) def _merge_similarities(self, dist, cluster_len, i, j): i_weight = cluster_len[i] j_weight = cluster_len[j] weight_sum = i_weight + j_weight dist[:i, i] = dist[:i, i] * i_weight + dist[:i, j] * j_weight dist[:i, i] /= weight_sum dist[i, i + 1 : j] = ( dist[i, i + 1 : j] * i_weight + dist[i + 1 : j, j] * j_weight ) dist[i, j + 1 :] = dist[i, j + 1 :] * i_weight + dist[j, j + 1 :] * j_weight dist[i, i + 1 :] /= weight_sum def update_clusters(self, num_clusters): clusters = self._dendrogram.groups(num_clusters) self._centroids = [] for cluster in clusters: assert len(cluster) > 0 if self._should_normalise: centroid = self._normalise(cluster[0]) else: centroid = numpy.array(cluster[0]) for vector in cluster[1:]: if self._should_normalise: centroid += self._normalise(vector) else: centroid += vector centroid /= len(cluster) self._centroids.append(centroid) self._num_clusters = len(self._centroids) def classify_vectorspace(self, vector): best = None for i in range(self._num_clusters): centroid = self._centroids[i] dist = cosine_distance(vector, centroid) if not best or dist < best[0]: best = (dist, i) return best[1] def dendrogram(self): return self._dendrogram def num_clusters(self): return self._num_clusters def __repr__(self): return "<GroupAverageAgglomerative Clusterer n=%d>" % self._num_clusters def demo(): from nltk.cluster import GAAClusterer vectors = [numpy.array(f) for f in [[3, 3], [1, 2], [4, 2], [4, 0], [2, 3], [3, 1]]] clusterer = GAAClusterer(4) clusters = clusterer.cluster(vectors, True) print("Clusterer:", clusterer) print("Clustered:", vectors) print("As:", clusters) print() clusterer.dendrogram().show() vector = numpy.array([3, 3]) print("classify(%s):" % vector, end=" ") print(clusterer.classify(vector)) print() if __name__ == "__main__": demo()
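A short sketch of re-cutting the dendrogram kept by the clusterer above at a different depth via update_clusters, assuming numpy is installed; the data are the same toy vectors the module's demo uses.

import numpy

from nltk.cluster import GAAClusterer

vectors = [numpy.array(f) for f in [[3, 3], [1, 2], [4, 2], [4, 0], [2, 3], [3, 1]]]

clusterer = GAAClusterer(4)
print(clusterer.cluster(vectors, True))      # assignments for 4 clusters

# the merge history is retained, so the same run can be re-cut into 2 groups
clusterer.update_clusters(2)
print(clusterer.num_clusters())              # 2
print([clusterer.classify(v) for v in vectors])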
Natural Language Toolkit: K-Means Clusterer
(C) 2001-2023 NLTK Project
Author: Trevor Cohn <tacohn@cs.mu.oz.au>
URL: https://www.nltk.org/
For license information, see LICENSE.TXT

The k-means clusterer starts with k arbitrary chosen means, then allocates each vector to the cluster with the closest mean. It then recalculates the means of each cluster as the centroid of the vectors in the cluster. This process repeats until the cluster memberships stabilise. This is a hill-climbing algorithm which may converge to a local maximum, hence the clustering is often repeated with random initial means and the most commonly occurring output means are chosen.

KMeansClusterer:
:param num_means: the number of means to use (may use fewer)
:type num_means: int
:param distance: measure of distance between two vectors
:type distance: function taking two vectors and returning a float
:param repeats: number of randomised clustering trials to use
:type repeats: int
:param conv_test: maximum variation in mean differences before deemed convergent
:type conv_test: number
:param initial_means: set of k initial means
:type initial_means: sequence of vectors
:param normalise: should vectors be normalised to length 1
:type normalise: boolean
:param svd_dimensions: number of dimensions to use in reducing vector dimensionality with SVD
:type svd_dimensions: int
:param rng: random number generator (or None)
:type rng: Random
:param avoid_empty_clusters: include current centroid in computation of next one; avoids undefined behavior when clusters become empty
:type avoid_empty_clusters: boolean

cluster_vectorspace(): with multiple trials, the means are sorted first so that different cluster numbering won't affect the distance comparison, the set of means that is minimally different from the others is found, and the best means are used. Each trial performs k-means clustering: assign the tokens to clusters based on minimum distance to the cluster means, recalculate the cluster means by computing the centroid of each cluster, and measure the degree of change from the previous step for convergence, remembering the new means. An optional trace reports how many vectors were allocated to each mean.

classify_vectorspace(): finds the closest cluster centroid and returns that cluster's index.

means(): the means used for clustering.

demo(): the example from Figure 14.9, page 517, Manning and Schutze; then tests k-means using the euclidean distance metric, 2 means and repeated clustering (10 trials) with random seeds, and finally classifies a new vector.
import copy import random import sys try: import numpy except ImportError: pass from nltk.cluster.util import VectorSpaceClusterer class KMeansClusterer(VectorSpaceClusterer): def __init__( self, num_means, distance, repeats=1, conv_test=1e-6, initial_means=None, normalise=False, svd_dimensions=None, rng=None, avoid_empty_clusters=False, ): VectorSpaceClusterer.__init__(self, normalise, svd_dimensions) self._num_means = num_means self._distance = distance self._max_difference = conv_test assert not initial_means or len(initial_means) == num_means self._means = initial_means assert repeats >= 1 assert not (initial_means and repeats > 1) self._repeats = repeats self._rng = rng if rng else random.Random() self._avoid_empty_clusters = avoid_empty_clusters def cluster_vectorspace(self, vectors, trace=False): if self._means and self._repeats > 1: print("Warning: means will be discarded for subsequent trials") meanss = [] for trial in range(self._repeats): if trace: print("k-means trial", trial) if not self._means or trial > 1: self._means = self._rng.sample(list(vectors), self._num_means) self._cluster_vectorspace(vectors, trace) meanss.append(self._means) if len(meanss) > 1: for means in meanss: means.sort(key=sum) min_difference = min_means = None for i in range(len(meanss)): d = 0 for j in range(len(meanss)): if i != j: d += self._sum_distances(meanss[i], meanss[j]) if min_difference is None or d < min_difference: min_difference, min_means = d, meanss[i] self._means = min_means def _cluster_vectorspace(self, vectors, trace=False): if self._num_means < len(vectors): converged = False while not converged: clusters = [[] for m in range(self._num_means)] for vector in vectors: index = self.classify_vectorspace(vector) clusters[index].append(vector) if trace: print("iteration") new_means = list(map(self._centroid, clusters, self._means)) difference = self._sum_distances(self._means, new_means) if difference < self._max_difference: converged = True self._means = new_means def classify_vectorspace(self, vector): best_distance = best_index = None for index in range(len(self._means)): mean = self._means[index] dist = self._distance(vector, mean) if best_distance is None or dist < best_distance: best_index, best_distance = index, dist return best_index def num_clusters(self): if self._means: return len(self._means) else: return self._num_means def means(self): return self._means def _sum_distances(self, vectors1, vectors2): difference = 0.0 for u, v in zip(vectors1, vectors2): difference += self._distance(u, v) return difference def _centroid(self, cluster, mean): if self._avoid_empty_clusters: centroid = copy.copy(mean) for vector in cluster: centroid += vector return centroid / (1 + len(cluster)) else: if not len(cluster): sys.stderr.write("Error: no centroid defined for empty cluster.\n") sys.stderr.write( "Try setting argument 'avoid_empty_clusters' to True\n" ) assert False centroid = copy.copy(cluster[0]) for vector in cluster[1:]: centroid += vector return centroid / len(cluster) def __repr__(self): return "<KMeansClusterer means=%s repeats=%d>" % (self._means, self._repeats) def demo(): from nltk.cluster import KMeansClusterer, euclidean_distance vectors = [numpy.array(f) for f in [[2, 1], [1, 3], [4, 7], [6, 7]]] means = [[4, 3], [5, 5]] clusterer = KMeansClusterer(2, euclidean_distance, initial_means=means) clusters = clusterer.cluster(vectors, True, trace=True) print("Clustered:", vectors) print("As:", clusters) print("Means:", clusterer.means()) print() vectors = [numpy.array(f) for f in 
[[3, 3], [1, 2], [4, 2], [4, 0], [2, 3], [3, 1]]] clusterer = KMeansClusterer(2, euclidean_distance, repeats=10) clusters = clusterer.cluster(vectors, True) print("Clustered:", vectors) print("As:", clusters) print("Means:", clusterer.means()) print() vector = numpy.array([3, 3]) print("classify(%s):" % vector, end=" ") print(clusterer.classify(vector)) print() if __name__ == "__main__": demo()
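A brief sketch of the reproducibility and robustness knobs on the clusterer above: a seeded rng for the repeated random restarts, and avoid_empty_clusters for degenerate data. It assumes numpy is installed; the vectors are the same toy data as the demo.

import random

import numpy

from nltk.cluster import KMeansClusterer, euclidean_distance

vectors = [numpy.array(f) for f in [[3, 3], [1, 2], [4, 2], [4, 0], [2, 3], [3, 1]]]

clusterer = KMeansClusterer(
    2,
    euclidean_distance,
    repeats=10,
    rng=random.Random(42),        # seeded, so the random restarts are repeatable
    avoid_empty_clusters=True,    # keep centroids defined if a cluster empties
)
print(clusterer.cluster(vectors, assign_clusters=True))
print(clusterer.means())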
natural language toolkit collections c 20012023 nltk project steven bird stevenbird1gmail com url https www nltk org for license information see license txt this unused import is for python 2 7 ordered dictionary returns iterator under python 3 and list under python 2 returns iterator under python 3 lazy sequences an abstract base class for readonly sequences whose values are computed as needed lazy sequences act like tuples they can be indexed sliced and iterated over but they may not be modified the most common application of lazy sequences in nltk is for corpus view objects which provide access to the contents of a corpus without loading the entire corpus into memory by loading pieces of the corpus from disk as needed the result of modifying a mutable element of a lazy sequence is undefined in particular the modifications made to the element may or may not persist depending on whether and when the lazy sequence caches that element s value or reconstructs it from scratch subclasses are required to define two methods len and iteratefrom return the number of tokens in the corpus file underlying this corpus view return an iterator that generates the tokens in the corpus file underlying this corpus view starting at the token number start if startlenself then this iterator will generate no tokens return the i th token in the corpus file underlying this corpus view negative indices and spans are both supported handle negative indices use iteratefrom to extract it return an iterator that generates the tokens in the corpus file underlying this corpus view return self iteratefrom0 def countself value return the index of the first occurrence of value in this list that is greater than or equal to start and less than stop negative start and stop values are treated like negative slice bounds i e they count from the end of the list start stop sliceboundsself slicestart stop for i elt in enumerateisliceself start stop if elt value return i start raise valueerrorindexx x not in list def containsself value return a list concatenating self with other return lazyconcatenationself other def raddself other return a list concatenating self with itself count times return lazyconcatenationself count def rmulself count return a string representation for this corpus view that is similar to a list s representation but if it would be more than 60 characters long it is truncated raise valueerror corpus view objects are unhashable a subsequence produced by slicing a lazy sequence this slice keeps a reference to its source sequence and generates its values by looking them up in the source sequence the minimum size for which lazy slices should be created if lazysubsequence is called with a subsequence that is shorter than minsize then a tuple will be returned instead construct a new slice from a given underlying sequence the start and stop indices should be absolute indices i e they should not be negative for indexing from the back of a list or greater than the length of source if the slice is small enough just use a tuple a lazy sequence formed by concatenating a list of lists this underlying list of lists may itself be lazy lazyconcatenation maintains an index that it uses to keep track of the relationship between offsets in the concatenated lists and offsets in the sublists construct an iterator over the sublists a lazy sequence whose elements are formed by applying a given function to each element in one or more underlying lists the function is applied lazily i e when you read a value from the list lazymap will 
calculate that value by applying its function to the underlying lists values lazymap is essentially a lazy version of the python primitive function map in particular the following two expressions are equivalent from nltk collections import lazymap function str sequence 1 2 3 mapfunction sequence doctest skip 1 2 3 listlazymapfunction sequence 1 2 3 like the python map primitive if the source lists do not have equal size then the value none will be supplied for the missing elements lazy maps can be useful for conserving memory in cases where individual values take up a lot of space this is especially true if the underlying list s values are constructed lazily as is the case with many corpus readers a typical example of a use case for this class is performing feature detection on the tokens in a corpus since featuresets are encoded as dictionaries which can take up a lot of memory using a lazymap can significantly reduce memory usage when training and running classifiers param function the function that should be applied to elements of lists it should take as many arguments as there are lists param lists the underlying lists param cachesize determines the size of the cache used by this lazy map default5 if you just take bool of sum here alllazy will be true just in case n 1 list is an abstractlazysequence presumably this isn t what s intended special case one lazy sublist special case one nonlazy sublist special case n lazy sublists general case handle negative indices check the cache calculate the value update the cache return the value a lazy sequence whose elements are tuples each containing the ith element from each of the argument sequences the returned list is truncated in length to the length of the shortest argument sequence the tuples are constructed lazily i e when you read a value from the list lazyzip will calculate that value by forming a tuple from the ith element of each of the argument sequences lazyzip is essentially a lazy version of the python primitive function zip in particular an evaluated lazyzip is equivalent to a zip from nltk collections import lazyzip sequence1 sequence2 1 2 3 a b c zipsequence1 sequence2 doctest skip 1 a 2 b 3 c listlazyzipsequence1 sequence2 1 a 2 b 3 c sequences sequence1 sequence2 6 7 8 9 listzipsequences listlazyzipsequences true lazy zips can be useful for conserving memory in cases where the argument sequences are particularly long a typical example of a use case for this class is combining long sequences of gold standard and predicted values in a classification or tagging task in order to calculate accuracy by constructing tuples lazily and avoiding the creation of an additional long sequence memory usage can be significantly reduced param lists the underlying lists type lists listlist a lazy sequence whose elements are tuples each containing a count from zero and a value yielded by underlying sequence lazyenumerate is useful for obtaining an indexed list the tuples are constructed lazily i e when you read a value from the list lazyenumerate will calculate that value by forming a tuple from the count of the ith element and the ith element of the underlying sequence lazyenumerate is essentially a lazy version of the python primitive function enumerate in particular the following two expressions are equivalent from nltk collections import lazyenumerate sequence first second third listenumeratesequence 0 first 1 second 2 third listlazyenumeratesequence 0 first 1 second 2 third lazy enumerations can be useful for conserving memory in cases 
where the argument sequences are particularly long a typical example of a use case for this class is obtaining an indexed list for a long sequence of values by constructing tuples lazily and avoiding the creation of an additional long sequence memory usage can be significantly reduced param lst the underlying list type lst list wraps an iterator loading its elements on demand and making them subscriptable repr displays only the first few elements create a new iterator over this list starting at the given offset while lenself cache start v nextself it self cache appendv i start while i lenself cache yield self cachei i 1 try while true v nextself it self cache appendv yield v except stopiteration pass def addself other return a list concatenating other with self return typeselfchainother self trie implementation class triedict builds a trie object which is built around a dict if strings is provided it will add the strings which consist of a list of strings to the trie otherwise it ll construct an empty trie param strings list of strings to insert into the trie default is none type strings liststr inserts string into the trie param string string to insert into the trie type string str example from nltk collections import trie trie trieabc def expected a b c true none d e f true none trie expected true mark the string is complete natural language toolkit collections c 2001 2023 nltk project steven bird stevenbird1 gmail com url https www nltk org for license information see license txt this unused import is for python 2 7 ordered dictionary returns iterator under python 3 and list under python 2 returns iterator under python 3 lazy sequences an abstract base class for read only sequences whose values are computed as needed lazy sequences act like tuples they can be indexed sliced and iterated over but they may not be modified the most common application of lazy sequences in nltk is for corpus view objects which provide access to the contents of a corpus without loading the entire corpus into memory by loading pieces of the corpus from disk as needed the result of modifying a mutable element of a lazy sequence is undefined in particular the modifications made to the element may or may not persist depending on whether and when the lazy sequence caches that element s value or reconstructs it from scratch subclasses are required to define two methods __len__ and iterate_from return the number of tokens in the corpus file underlying this corpus view return an iterator that generates the tokens in the corpus file underlying this corpus view starting at the token number start if start len self then this iterator will generate no tokens return the i th token in the corpus file underlying this corpus view negative indices and spans are both supported handle negative indices use iterate_from to extract it return an iterator that generates the tokens in the corpus file underlying this corpus view return the number of times this list contains value return the index of the first occurrence of value in this list that is greater than or equal to start and less than stop negative start and stop values are treated like negative slice bounds i e they count from the end of the list return true if this list contains value return a list concatenating self with other return a list concatenating other with self return a list concatenating self with itself count times return a list concatenating self with itself count times return a string representation for this corpus view that is similar to a list s 
representation, but if it would be more than 60 characters long, it is truncated. :raise ValueError: Corpus view objects are unhashable.

LazySubsequence: a subsequence produced by slicing a lazy sequence. This slice keeps a reference to its source sequence, and generates its values by looking them up in the source sequence. MIN_SIZE is the minimum size for which lazy slices should be created; if LazySubsequence() is called with a subsequence that is shorter than MIN_SIZE, then a tuple will be returned instead. Construct a new slice from a given underlying sequence. The start and stop indices should be absolute indices, i.e. they should not be negative (for indexing from the back of a list) or greater than the length of source. If the slice is small enough, just use a tuple.

LazyConcatenation: a lazy sequence formed by concatenating a list of lists. This underlying list of lists may itself be lazy. LazyConcatenation maintains an index that it uses to keep track of the relationship between offsets in the concatenated lists and offsets in the sublists. (Construct an iterator over the sublists.)

LazyMap: a lazy sequence whose elements are formed by applying a given function to each element in one or more underlying lists. The function is applied lazily, i.e. when you read a value from the list, LazyMap will calculate that value by applying its function to the underlying lists' value(s). LazyMap is essentially a lazy version of the Python primitive function map(). In particular, the following two expressions are equivalent:

    >>> from nltk.collections import LazyMap
    >>> function = str
    >>> sequence = [1, 2, 3]
    >>> map(function, sequence) # doctest: +SKIP
    ['1', '2', '3']
    >>> list(LazyMap(function, sequence))
    ['1', '2', '3']

Like the Python map() primitive, if the source lists do not have equal size, then the value None will be supplied for the missing elements. Lazy maps can be useful for conserving memory in cases where individual values take up a lot of space. This is especially true if the underlying list's values are constructed lazily, as is the case with many corpus readers. A typical example of a use case for this class is performing feature detection on the tokens in a corpus. Since featuresets are encoded as dictionaries, which can take up a lot of memory, using a LazyMap can significantly reduce memory usage when training and running classifiers.

:param function: the function that should be applied to elements of lists; it should take as many arguments as there are lists
:param lists: the underlying lists
:param cache_size: determines the size of the cache used by this lazy map (default: 5)

Implementation notes: if you just take bool() of sum() here, _all_lazy will be true just in case n >= 1 list is an AbstractLazySequence, and presumably this isn't what's intended. Special case: one lazy sublist. Special case: one non-lazy sublist. Special case: n lazy sublists. (FIXME: what is this except really catching? StopIteration?) General case. Handle negative indices; check the cache; calculate the value; update the cache, discarding a random entry; return the value.

LazyZip: a lazy sequence whose elements are tuples, each containing the i-th element from each of the argument sequences. The returned list is truncated in length to the length of the shortest argument sequence. The tuples are constructed lazily, i.e. when you read a value from the list, LazyZip will calculate that value by forming a tuple from the i-th element of each of the argument sequences. LazyZip is essentially a lazy version of the Python primitive function zip(). In particular, an evaluated LazyZip is equivalent to a zip:

    >>> from nltk.collections import LazyZip
    >>> sequence1, sequence2 = [1, 2, 3], ['a', 'b', 'c']
    >>> zip(sequence1, sequence2) # doctest: +SKIP
    [(1, 'a'), (2, 'b'), (3, 'c')]
    >>> list(LazyZip(sequence1, sequence2))
    [(1, 'a'), (2, 'b'), (3, 'c')]
    >>> sequences = [sequence1, sequence2, [6, 7, 8, 9]]
    >>> list(zip(*sequences)) == list(LazyZip(*sequences))
    True

Lazy zips can be useful for conserving memory in cases where the argument sequences are particularly long. A typical example of a use case for this class is combining long sequences of gold standard and predicted values in a classification or tagging task in order to calculate accuracy. By constructing tuples lazily and avoiding the creation of an additional long sequence, memory usage can be significantly reduced.

:param lists: the underlying lists
:type lists: list(list)

LazyEnumerate: a lazy sequence whose elements are tuples, each containing a count (from zero) and a value yielded by the underlying sequence. LazyEnumerate is useful for obtaining an indexed list. The tuples are constructed lazily, i.e. when you read a value from the list, LazyEnumerate will calculate that value by forming a tuple from the count of the i-th element and the i-th element of the underlying sequence. LazyEnumerate is essentially a lazy version of the Python primitive function enumerate(). In particular, the following two expressions are equivalent:

    >>> from nltk.collections import LazyEnumerate
    >>> sequence = ['first', 'second', 'third']
    >>> list(enumerate(sequence))
    [(0, 'first'), (1, 'second'), (2, 'third')]
    >>> list(LazyEnumerate(sequence))
    [(0, 'first'), (1, 'second'), (2, 'third')]

Lazy enumerations can be useful for conserving memory in cases where the argument sequences are particularly long. A typical example of a use case for this class is obtaining an indexed list for a long sequence of values. By constructing tuples lazily and avoiding the creation of an additional long sequence, memory usage can be significantly reduced.

:param lst: the underlying list
:type lst: list

LazyIteratorList: wraps an iterator, loading its elements on demand and making them subscriptable. __repr__ displays only the first few elements. (Create a new iterator over this list, starting at the given offset. Return a list concatenating self with other. Return a list concatenating other with self.)

Trie: a trie implementation for strings. Builds a Trie object, which is built around a dict. If strings is provided, it will add the strings, which consist of a list of strings, to the Trie. Otherwise, it'll construct an empty Trie.

:param strings: list of strings to insert into the Trie (default is None)
:type strings: list(str)

insert(string): inserts string into the Trie.

:param string: string to insert into the Trie
:type string: str

Example:

    >>> from nltk.collections import Trie
    >>> trie = Trie(["abc", "def"])
    >>> expected = {'a': {'b': {'c': {True: None}}}, 'd': {'e': {'f': {True: None}}}}
    >>> trie == expected
    True

(Mark that the string is complete.)
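The laziness described above is easy to see interactively. A minimal sketch, assuming NLTK is installed so the module below is importable as nltk.collections; the expensive() helper is only a stand-in for a costly per-token computation:

from nltk.collections import LazyMap, LazyZip

def expensive(token):
    # Stand-in for a costly per-token computation, e.g. feature extraction.
    return {"length": len(token)}

tokens = ["the", "quick", "brown", "fox"]
features = LazyMap(expensive, tokens)    # nothing is computed yet
print(features[2])                       # {'length': 5} -- computed on access
print(list(LazyZip(tokens, range(10))))  # truncated to the 4-element list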
import bisect from collections import Counter, defaultdict, deque from functools import total_ordering from itertools import chain, islice from nltk.internals import raise_unorderable_types, slice_bounds class OrderedDict(dict): def __init__(self, data=None, **kwargs): self._keys = self.keys(data, kwargs.get("keys")) self._default_factory = kwargs.get("default_factory") if data is None: dict.__init__(self) else: dict.__init__(self, data) def __delitem__(self, key): dict.__delitem__(self, key) self._keys.remove(key) def __getitem__(self, key): try: return dict.__getitem__(self, key) except KeyError: return self.__missing__(key) def __iter__(self): return (key for key in self.keys()) def __missing__(self, key): if not self._default_factory and key not in self._keys: raise KeyError() return self._default_factory() def __setitem__(self, key, item): dict.__setitem__(self, key, item) if key not in self._keys: self._keys.append(key) def clear(self): dict.clear(self) self._keys.clear() def copy(self): d = dict.copy(self) d._keys = self._keys return d def items(self): return zip(self.keys(), self.values()) def keys(self, data=None, keys=None): if data: if keys: assert isinstance(keys, list) assert len(data) == len(keys) return keys else: assert ( isinstance(data, dict) or isinstance(data, OrderedDict) or isinstance(data, list) ) if isinstance(data, dict) or isinstance(data, OrderedDict): return data.keys() elif isinstance(data, list): return [key for (key, value) in data] elif "_keys" in self.__dict__: return self._keys else: return [] def popitem(self): if not self._keys: raise KeyError() key = self._keys.pop() value = self[key] del self[key] return (key, value) def setdefault(self, key, failobj=None): dict.setdefault(self, key, failobj) if key not in self._keys: self._keys.append(key) def update(self, data): dict.update(self, data) for key in self.keys(data): if key not in self._keys: self._keys.append(key) def values(self): return map(self.get, self._keys) @total_ordering class AbstractLazySequence: def __len__(self): raise NotImplementedError("should be implemented by subclass") def iterate_from(self, start): raise NotImplementedError("should be implemented by subclass") def __getitem__(self, i): if isinstance(i, slice): start, stop = slice_bounds(self, i) return LazySubsequence(self, start, stop) else: if i < 0: i += len(self) if i < 0: raise IndexError("index out of range") try: return next(self.iterate_from(i)) except StopIteration as e: raise IndexError("index out of range") from e def __iter__(self): return self.iterate_from(0) def count(self, value): return sum(1 for elt in self if elt == value) def index(self, value, start=None, stop=None): start, stop = slice_bounds(self, slice(start, stop)) for i, elt in enumerate(islice(self, start, stop)): if elt == value: return i + start raise ValueError("index(x): x not in list") def __contains__(self, value): return bool(self.count(value)) def __add__(self, other): return LazyConcatenation([self, other]) def __radd__(self, other): return LazyConcatenation([other, self]) def __mul__(self, count): return LazyConcatenation([self] * count) def __rmul__(self, count): return LazyConcatenation([self] * count) _MAX_REPR_SIZE = 60 def __repr__(self): pieces = [] length = 5 for elt in self: pieces.append(repr(elt)) length += len(pieces[-1]) + 2 if length > self._MAX_REPR_SIZE and len(pieces) > 2: return "[%s, ...]" % ", ".join(pieces[:-1]) return "[%s]" % ", ".join(pieces) def __eq__(self, other): return type(self) == type(other) and list(self) == 
list(other) def __ne__(self, other): return not self == other def __lt__(self, other): if type(other) != type(self): raise_unorderable_types("<", self, other) return list(self) < list(other) def __hash__(self): raise ValueError("%s objects are unhashable" % self.__class__.__name__) class LazySubsequence(AbstractLazySequence): MIN_SIZE = 100 def __new__(cls, source, start, stop): if stop - start < cls.MIN_SIZE: return list(islice(source.iterate_from(start), stop - start)) else: return object.__new__(cls) def __init__(self, source, start, stop): self._source = source self._start = start self._stop = stop def __len__(self): return self._stop - self._start def iterate_from(self, start): return islice( self._source.iterate_from(start + self._start), max(0, len(self) - start) ) class LazyConcatenation(AbstractLazySequence): def __init__(self, list_of_lists): self._list = list_of_lists self._offsets = [0] def __len__(self): if len(self._offsets) <= len(self._list): for _ in self.iterate_from(self._offsets[-1]): pass return self._offsets[-1] def iterate_from(self, start_index): if start_index < self._offsets[-1]: sublist_index = bisect.bisect_right(self._offsets, start_index) - 1 else: sublist_index = len(self._offsets) - 1 index = self._offsets[sublist_index] if isinstance(self._list, AbstractLazySequence): sublist_iter = self._list.iterate_from(sublist_index) else: sublist_iter = islice(self._list, sublist_index, None) for sublist in sublist_iter: if sublist_index == (len(self._offsets) - 1): assert ( index + len(sublist) >= self._offsets[-1] ), "offsets not monotonic increasing!" self._offsets.append(index + len(sublist)) else: assert self._offsets[sublist_index + 1] == index + len( sublist ), "inconsistent list value (num elts)" yield from sublist[max(0, start_index - index) :] index += len(sublist) sublist_index += 1 class LazyMap(AbstractLazySequence): def __init__(self, function, *lists, **config): if not lists: raise TypeError("LazyMap requires at least two args") self._lists = lists self._func = function self._cache_size = config.get("cache_size", 5) self._cache = {} if self._cache_size > 0 else None self._all_lazy = sum( isinstance(lst, AbstractLazySequence) for lst in lists ) == len(lists) def iterate_from(self, index): if len(self._lists) == 1 and self._all_lazy: for value in self._lists[0].iterate_from(index): yield self._func(value) return elif len(self._lists) == 1: while True: try: yield self._func(self._lists[0][index]) except IndexError: return index += 1 elif self._all_lazy: iterators = [lst.iterate_from(index) for lst in self._lists] while True: elements = [] for iterator in iterators: try: elements.append(next(iterator)) except: elements.append(None) if elements == [None] * len(self._lists): return yield self._func(*elements) index += 1 else: while True: try: elements = [lst[index] for lst in self._lists] except IndexError: elements = [None] * len(self._lists) for i, lst in enumerate(self._lists): try: elements[i] = lst[index] except IndexError: pass if elements == [None] * len(self._lists): return yield self._func(*elements) index += 1 def __getitem__(self, index): if isinstance(index, slice): sliced_lists = [lst[index] for lst in self._lists] return LazyMap(self._func, *sliced_lists) else: if index < 0: index += len(self) if index < 0: raise IndexError("index out of range") if self._cache is not None and index in self._cache: return self._cache[index] try: val = next(self.iterate_from(index)) except StopIteration as e: raise IndexError("index out of range") from e if 
self._cache is not None: if len(self._cache) > self._cache_size: self._cache.popitem() self._cache[index] = val return val def __len__(self): return max(len(lst) for lst in self._lists) class LazyZip(LazyMap): def __init__(self, *lists): LazyMap.__init__(self, lambda *elts: elts, *lists) def iterate_from(self, index): iterator = LazyMap.iterate_from(self, index) while index < len(self): yield next(iterator) index += 1 return def __len__(self): return min(len(lst) for lst in self._lists) class LazyEnumerate(LazyZip): def __init__(self, lst): LazyZip.__init__(self, range(len(lst)), lst) class LazyIteratorList(AbstractLazySequence): def __init__(self, it, known_len=None): self._it = it self._len = known_len self._cache = [] def __len__(self): if self._len: return self._len for _ in self.iterate_from(len(self._cache)): pass self._len = len(self._cache) return self._len def iterate_from(self, start): while len(self._cache) < start: v = next(self._it) self._cache.append(v) i = start while i < len(self._cache): yield self._cache[i] i += 1 try: while True: v = next(self._it) self._cache.append(v) yield v except StopIteration: pass def __add__(self, other): return type(self)(chain(self, other)) def __radd__(self, other): return type(self)(chain(other, self)) class Trie(dict): LEAF = True def __init__(self, strings=None): super().__init__() if strings: for string in strings: self.insert(string) def insert(self, string): if len(string): self[string[0]].insert(string[1:]) else: self[Trie.LEAF] = None def __missing__(self, key): self[key] = Trie() return self[key]
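A quick check of the Trie defined above; this is only a sketch and assumes the class exactly as written here (Trie.LEAF is the value True):

from nltk.collections import Trie

# Inserting "abc" and "abd" shares the "ab" prefix; Trie.LEAF marks the
# point where a complete string ends.
t = Trie(["abc", "abd"])
print(Trie.LEAF in t["a"]["b"]["c"])   # True  -- "abc" is a complete entry
print(Trie.LEAF in t["a"]["b"])        # False -- "ab" is only a prefix
print(sorted(k for k in t["a"]["b"]))  # ['c', 'd']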
Natural Language Toolkit: Collocations and Association Measures
(C) 2001-2023 NLTK Project
Author: Joel Nothman <jnothman@student.usyd.edu.au>
URL: <https://www.nltk.org/>
For license information, see LICENSE.TXT

Tools to identify collocations - words that often appear consecutively - within corpora. They may also be used to find other associations between word occurrences. See Manning and Schutze ch. 5 at https://nlp.stanford.edu/fsnlp/promo/colloc.pdf and the Text::NSP Perl package at http://ngram.sourceforge.net.

Finding collocations requires first calculating the frequencies of words and their appearance in the context of other words. Often the collection of words will then require filtering to only retain useful content terms. Each ngram of words may then be scored according to some association measure, in order to determine the relative likelihood of each ngram being a collocation.

The BigramCollocationFinder and TrigramCollocationFinder classes provide these functionalities, dependent on being provided a function which scores an ngram given appropriate frequency counts. A number of standard association measures are provided in bigram_measures and trigram_measures.

Possible TODOs:
- consider the distinction between f(x, _) and f(x) and whether our approximation is good enough for fragmented data, and mention it
- add an n-gram collocation finder with measures which only utilise n-gram and unigram counts (raw_freq, pmi, student_t)

(These two unused imports are referenced in collocations.doctest.)

AbstractCollocationFinder: an abstract base class for collocation finders whose purpose is to collect collocation candidate frequencies, then filter and rank them. As a minimum, collocation finders require the frequencies of each word in a corpus, and the joint frequency of word tuples. This data should be provided through nltk.probability.FreqDist objects or an identical interface. Pad the document with the place holder according to the window_size. from_documents() constructs a collocation finder given a collection of documents, each of which is a list (or iterable) of tokens (return cls.from_words(_itertools.chain(*documents))). _apply_filter() is a generic filter: it removes ngrams from the frequency distribution if the function returns True when passed an ngram tuple. apply_freq_filter() removes candidate ngrams which have frequency less than min_freq. apply_ngram_filter() removes candidate ngrams (w1, w2, ...) where fn(w1, w2, ...) evaluates to True. apply_word_filter() removes candidate ngrams (w1, w2, ...) where any of (fn(w1), fn(w2), ...) evaluates to True. _score_ngrams() generates (ngram, score) pairs as determined by the scoring function provided. score_ngrams() returns a sequence of (ngram, score) pairs ordered from highest to lowest score, as determined by the scoring function provided. nbest() returns the top n ngrams when scored by the given function. above_score() returns a sequence of ngrams, ordered by decreasing score, whose scores each exceed the given minimum score.

BigramCollocationFinder: a tool for the finding and ranking of bigram collocations or other association measures. It is often useful to use from_words() rather than constructing an instance directly. Construct a BigramCollocationFinder, given FreqDists for appearances of words and (possibly non-contiguous) bigrams. from_words() constructs a BigramCollocationFinder for all bigrams in the given sequence; when window_size > 2, count non-contiguous bigrams, in the style of Church and Hanks's (1990) association ratio. score_ngram() returns the score for a given bigram using the given scoring function; following Church and Hanks (1990), counts are scaled by a factor of 1/(window_size - 1).

TrigramCollocationFinder: a tool for the finding and ranking of trigram collocations or other association measures. It is often useful to use from_words() rather than constructing an instance directly. Construct a TrigramCollocationFinder, given FreqDists for appearances of words, bigrams, two words with any word between them, and trigrams. from_words() constructs a TrigramCollocationFinder for all trigrams in the given sequence. bigram_finder() constructs a bigram collocation finder with the bigram and unigram data from this finder; note that this does not include any filtering applied to this finder. score_ngram() returns the score for a given trigram using the given scoring function.

QuadgramCollocationFinder: a tool for the finding and ranking of quadgram collocations or other association measures. It is often useful to use from_words() rather than constructing an instance directly. Construct a QuadgramCollocationFinder, given FreqDists for appearances of words, bigrams, trigrams, two words with one word and two words between them, and three words with a word between them in both variations.

demo() finds bigram collocations in the files of the webtext corpus. (bigram_measures = BigramAssocMeasures() and trigram_measures = TrigramAssocMeasures() are not instantiated at module level because that slows down loading too much.)
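A short usage sketch of the workflow described above, assuming the NLTK 'genesis' corpus data has been downloaded; any other token sequence works the same way:

from nltk.collocations import BigramCollocationFinder
from nltk.corpus import genesis
from nltk.metrics import BigramAssocMeasures

words = genesis.words("english-web.txt")
finder = BigramCollocationFinder.from_words(words)
finder.apply_freq_filter(3)                      # drop rare candidates
print(finder.nbest(BigramAssocMeasures.pmi, 5))  # top 5 bigrams by PMI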
import itertools as _itertools from nltk.metrics import ( BigramAssocMeasures, ContingencyMeasures, QuadgramAssocMeasures, TrigramAssocMeasures, ) from nltk.metrics.spearman import ranks_from_scores, spearman_correlation from nltk.probability import FreqDist from nltk.util import ngrams class AbstractCollocationFinder: def __init__(self, word_fd, ngram_fd): self.word_fd = word_fd self.N = word_fd.N() self.ngram_fd = ngram_fd @classmethod def _build_new_documents( cls, documents, window_size, pad_left=False, pad_right=False, pad_symbol=None ): padding = (pad_symbol,) * (window_size - 1) if pad_right: return _itertools.chain.from_iterable( _itertools.chain(doc, padding) for doc in documents ) if pad_left: return _itertools.chain.from_iterable( _itertools.chain(padding, doc) for doc in documents ) @classmethod def from_documents(cls, documents): return cls.from_words( cls._build_new_documents(documents, cls.default_ws, pad_right=True) ) @staticmethod def _ngram_freqdist(words, n): return FreqDist(tuple(words[i : i + n]) for i in range(len(words) - 1)) def _apply_filter(self, fn=lambda ngram, freq: False): tmp_ngram = FreqDist() for ngram, freq in self.ngram_fd.items(): if not fn(ngram, freq): tmp_ngram[ngram] = freq self.ngram_fd = tmp_ngram def apply_freq_filter(self, min_freq): self._apply_filter(lambda ng, freq: freq < min_freq) def apply_ngram_filter(self, fn): self._apply_filter(lambda ng, f: fn(*ng)) def apply_word_filter(self, fn): self._apply_filter(lambda ng, f: any(fn(w) for w in ng)) def _score_ngrams(self, score_fn): for tup in self.ngram_fd: score = self.score_ngram(score_fn, *tup) if score is not None: yield tup, score def score_ngrams(self, score_fn): return sorted(self._score_ngrams(score_fn), key=lambda t: (-t[1], t[0])) def nbest(self, score_fn, n): return [p for p, s in self.score_ngrams(score_fn)[:n]] def above_score(self, score_fn, min_score): for ngram, score in self.score_ngrams(score_fn): if score > min_score: yield ngram else: break class BigramCollocationFinder(AbstractCollocationFinder): default_ws = 2 def __init__(self, word_fd, bigram_fd, window_size=2): AbstractCollocationFinder.__init__(self, word_fd, bigram_fd) self.window_size = window_size @classmethod def from_words(cls, words, window_size=2): wfd = FreqDist() bfd = FreqDist() if window_size < 2: raise ValueError("Specify window_size at least 2") for window in ngrams(words, window_size, pad_right=True): w1 = window[0] if w1 is None: continue wfd[w1] += 1 for w2 in window[1:]: if w2 is not None: bfd[(w1, w2)] += 1 return cls(wfd, bfd, window_size=window_size) def score_ngram(self, score_fn, w1, w2): n_all = self.N n_ii = self.ngram_fd[(w1, w2)] / (self.window_size - 1.0) if not n_ii: return n_ix = self.word_fd[w1] n_xi = self.word_fd[w2] return score_fn(n_ii, (n_ix, n_xi), n_all) class TrigramCollocationFinder(AbstractCollocationFinder): default_ws = 3 def __init__(self, word_fd, bigram_fd, wildcard_fd, trigram_fd): AbstractCollocationFinder.__init__(self, word_fd, trigram_fd) self.wildcard_fd = wildcard_fd self.bigram_fd = bigram_fd @classmethod def from_words(cls, words, window_size=3): if window_size < 3: raise ValueError("Specify window_size at least 3") wfd = FreqDist() wildfd = FreqDist() bfd = FreqDist() tfd = FreqDist() for window in ngrams(words, window_size, pad_right=True): w1 = window[0] if w1 is None: continue for w2, w3 in _itertools.combinations(window[1:], 2): wfd[w1] += 1 if w2 is None: continue bfd[(w1, w2)] += 1 if w3 is None: continue wildfd[(w1, w3)] += 1 tfd[(w1, w2, w3)] += 1 return 
cls(wfd, bfd, wildfd, tfd) def bigram_finder(self): return BigramCollocationFinder(self.word_fd, self.bigram_fd) def score_ngram(self, score_fn, w1, w2, w3): n_all = self.N n_iii = self.ngram_fd[(w1, w2, w3)] if not n_iii: return n_iix = self.bigram_fd[(w1, w2)] n_ixi = self.wildcard_fd[(w1, w3)] n_xii = self.bigram_fd[(w2, w3)] n_ixx = self.word_fd[w1] n_xix = self.word_fd[w2] n_xxi = self.word_fd[w3] return score_fn(n_iii, (n_iix, n_ixi, n_xii), (n_ixx, n_xix, n_xxi), n_all) class QuadgramCollocationFinder(AbstractCollocationFinder): default_ws = 4 def __init__(self, word_fd, quadgram_fd, ii, iii, ixi, ixxi, iixi, ixii): AbstractCollocationFinder.__init__(self, word_fd, quadgram_fd) self.iii = iii self.ii = ii self.ixi = ixi self.ixxi = ixxi self.iixi = iixi self.ixii = ixii @classmethod def from_words(cls, words, window_size=4): if window_size < 4: raise ValueError("Specify window_size at least 4") ixxx = FreqDist() iiii = FreqDist() ii = FreqDist() iii = FreqDist() ixi = FreqDist() ixxi = FreqDist() iixi = FreqDist() ixii = FreqDist() for window in ngrams(words, window_size, pad_right=True): w1 = window[0] if w1 is None: continue for w2, w3, w4 in _itertools.combinations(window[1:], 3): ixxx[w1] += 1 if w2 is None: continue ii[(w1, w2)] += 1 if w3 is None: continue iii[(w1, w2, w3)] += 1 ixi[(w1, w3)] += 1 if w4 is None: continue iiii[(w1, w2, w3, w4)] += 1 ixxi[(w1, w4)] += 1 ixii[(w1, w3, w4)] += 1 iixi[(w1, w2, w4)] += 1 return cls(ixxx, iiii, ii, iii, ixi, ixxi, iixi, ixii) def score_ngram(self, score_fn, w1, w2, w3, w4): n_all = self.N n_iiii = self.ngram_fd[(w1, w2, w3, w4)] if not n_iiii: return n_iiix = self.iii[(w1, w2, w3)] n_xiii = self.iii[(w2, w3, w4)] n_iixi = self.iixi[(w1, w2, w4)] n_ixii = self.ixii[(w1, w3, w4)] n_iixx = self.ii[(w1, w2)] n_xxii = self.ii[(w3, w4)] n_xiix = self.ii[(w2, w3)] n_ixix = self.ixi[(w1, w3)] n_ixxi = self.ixxi[(w1, w4)] n_xixi = self.ixi[(w2, w4)] n_ixxx = self.word_fd[w1] n_xixx = self.word_fd[w2] n_xxix = self.word_fd[w3] n_xxxi = self.word_fd[w4] return score_fn( n_iiii, (n_iiix, n_iixi, n_ixii, n_xiii), (n_iixx, n_ixix, n_ixxi, n_xixi, n_xxii, n_xiix), (n_ixxx, n_xixx, n_xxix, n_xxxi), n_all, ) def demo(scorer=None, compare_scorer=None): from nltk.metrics import ( BigramAssocMeasures, ranks_from_scores, spearman_correlation, ) if scorer is None: scorer = BigramAssocMeasures.likelihood_ratio if compare_scorer is None: compare_scorer = BigramAssocMeasures.raw_freq from nltk.corpus import stopwords, webtext ignored_words = stopwords.words("english") word_filter = lambda w: len(w) < 3 or w.lower() in ignored_words for file in webtext.fileids(): words = [word.lower() for word in webtext.words(file)] cf = BigramCollocationFinder.from_words(words) cf.apply_freq_filter(3) cf.apply_word_filter(word_filter) corr = spearman_correlation( ranks_from_scores(cf.score_ngrams(scorer)), ranks_from_scores(cf.score_ngrams(compare_scorer)), ) print(file) print("\t", [" ".join(tup) for tup in cf.nbest(scorer, 15)]) print(f"\t Correlation to {compare_scorer.__name__}: {corr:0.4f}") if __name__ == "__main__": import sys from nltk.metrics import BigramAssocMeasures try: scorer = eval("BigramAssocMeasures." + sys.argv[1]) except IndexError: scorer = None try: compare_scorer = eval("BigramAssocMeasures." + sys.argv[2]) except IndexError: compare_scorer = None demo(scorer, compare_scorer) __all__ = [ "BigramCollocationFinder", "TrigramCollocationFinder", "QuadgramCollocationFinder", ]
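To illustrate the window_size behaviour implemented in BigramCollocationFinder.from_words above, here is a small sketch on toy tokens; as noted earlier, counts are scaled by 1/(window_size - 1):

from nltk.collocations import BigramCollocationFinder
from nltk.metrics import BigramAssocMeasures

tokens = "the big dog barked at the big brown dog".split()
finder = BigramCollocationFinder.from_words(tokens, window_size=3)
# ('big', 'dog') is counted both when adjacent and when one word intervenes.
print(finder.score_ngram(BigramAssocMeasures.raw_freq, "big", "dog"))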
Natural Language Toolkit: Compatibility
(C) 2001-2023 NLTK Project
URL: <https://www.nltk.org/>
For license information, see LICENSE.TXT

Compatibility for datasets that care about Python versions. The following datasets have a /PY3 subdirectory containing a full copy of the data which has been re-encoded or repickled. For use in adding /PY3 to the second (filename) argument of the file pointers in data.py.
import os from functools import wraps DATA_UPDATES = [ ("chunkers", "maxent_ne_chunker"), ("help", "tagsets"), ("taggers", "maxent_treebank_pos_tagger"), ("tokenizers", "punkt"), ] _PY3_DATA_UPDATES = [os.path.join(*path_list) for path_list in DATA_UPDATES] def add_py3_data(path): for item in _PY3_DATA_UPDATES: if item in str(path) and "/PY3" not in str(path): pos = path.index(item) + len(item) if path[pos : pos + 4] == ".zip": pos += 4 path = path[:pos] + "/PY3" + path[pos:] break return path def py3_data(init_func): def _decorator(*args, **kwargs): args = (args[0], add_py3_data(args[1])) + args[2:] return init_func(*args, **kwargs) return wraps(init_func)(_decorator)
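A sketch of the path rewriting performed by add_py3_data above, shown with POSIX separators; the exact string depends on os.path.join on your platform:

from nltk.compat import add_py3_data

print(add_py3_data("tokenizers/punkt/english.pickle"))
# -> tokenizers/punkt/PY3/english.pickle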
Natural Language Toolkit: Europarl Corpus Readers
(C) 2001-2023 NLTK Project
Author: Nitin Madnani <nmadnani@umiacs.umd.edu>
URL: <https://www.nltk.org/>
For license information, see LICENSE.TXT

Create a new corpus reader instance for each European language.
import re from nltk.corpus.reader import * from nltk.corpus.util import LazyCorpusLoader danish: EuroparlCorpusReader = LazyCorpusLoader( "europarl_raw/danish", EuroparlCorpusReader, r"ep-.*\.da", encoding="utf-8" ) dutch: EuroparlCorpusReader = LazyCorpusLoader( "europarl_raw/dutch", EuroparlCorpusReader, r"ep-.*\.nl", encoding="utf-8" ) english: EuroparlCorpusReader = LazyCorpusLoader( "europarl_raw/english", EuroparlCorpusReader, r"ep-.*\.en", encoding="utf-8" ) finnish: EuroparlCorpusReader = LazyCorpusLoader( "europarl_raw/finnish", EuroparlCorpusReader, r"ep-.*\.fi", encoding="utf-8" ) french: EuroparlCorpusReader = LazyCorpusLoader( "europarl_raw/french", EuroparlCorpusReader, r"ep-.*\.fr", encoding="utf-8" ) german: EuroparlCorpusReader = LazyCorpusLoader( "europarl_raw/german", EuroparlCorpusReader, r"ep-.*\.de", encoding="utf-8" ) greek: EuroparlCorpusReader = LazyCorpusLoader( "europarl_raw/greek", EuroparlCorpusReader, r"ep-.*\.el", encoding="utf-8" ) italian: EuroparlCorpusReader = LazyCorpusLoader( "europarl_raw/italian", EuroparlCorpusReader, r"ep-.*\.it", encoding="utf-8" ) portuguese: EuroparlCorpusReader = LazyCorpusLoader( "europarl_raw/portuguese", EuroparlCorpusReader, r"ep-.*\.pt", encoding="utf-8" ) spanish: EuroparlCorpusReader = LazyCorpusLoader( "europarl_raw/spanish", EuroparlCorpusReader, r"ep-.*\.es", encoding="utf-8" ) swedish: EuroparlCorpusReader = LazyCorpusLoader( "europarl_raw/swedish", EuroparlCorpusReader, r"ep-.*\.sv", encoding="utf-8" )
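A usage sketch for the readers defined above, assuming the 'europarl_raw' data package has been downloaded via nltk.download():

from nltk.corpus.europarl_raw import english

print(english.fileids()[:2])  # e.g. the first two 'ep-...' file ids
print(english.words()[:8])    # first few tokens of the corpus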
Natural Language Toolkit: Corpus Readers
(C) 2001-2023 NLTK Project
Author: Steven Bird <stevenbird1@gmail.com>, Edward Loper <edloper@gmail.com>
URL: <https://www.nltk.org/>
For license information, see LICENSE.TXT

NLTK corpus readers. The modules in this package provide functions that can be used to read corpus fileids in a variety of formats. These functions can be used to read both the corpus fileids that are distributed in the NLTK corpus package, and corpus fileids that are part of external corpora.

Corpus Reader Functions: each corpus module defines one or more "corpus reader functions", which can be used to read documents from that corpus. These functions take an argument, item, which is used to indicate which document should be read from the corpus. If item is one of the unique identifiers listed in the corpus module's items variable, then the corresponding document will be loaded from the NLTK corpus package. If item is a fileid, then that file will be read. Additionally, corpus reader functions can be given lists of item names, in which case they will return a concatenation of the corresponding documents.

Corpus reader functions are named based on the type of information they return. Some common examples, and their return types, are:
- words(): list of str
- sents(): list of (list of str)
- paras(): list of (list of (list of str))
- tagged_words(): list of (str, str) tuples
- tagged_sents(): list of (list of (str, str))
- tagged_paras(): list of (list of (list of (str, str)))
- chunked_sents(): list of (Tree w/ (str, str) leaves)
- parsed_sents(): list of (Tree with str leaves)
- parsed_paras(): list of (list of (Tree with str leaves))
- xml(): a single XML ElementTree
- raw(): unprocessed corpus contents

For example, to read a list of the words in the Brown Corpus, use nltk.corpus.brown.words():

    >>> from nltk.corpus import brown
    >>> print(" ".join(brown.words()[:6]))  # only first 6 words
    The Fulton County Grand Jury said

(isort:skip_file. Make sure that nltk.corpus.reader.bracket_parse gives the module, not the function bracket_parse defined in nltk.tree.)
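The reader methods listed above share the same shape across corpora. A short sketch using the Brown corpus (requires the 'brown' data package); the directory in the second half is a placeholder for your own plaintext files:

from nltk.corpus import brown
from nltk.corpus.reader import PlaintextCorpusReader

print(brown.words()[:6])         # list of str
print(brown.tagged_words()[:3])  # list of (str, tag) tuples
print(brown.sents()[0][:6])      # first sentence, as a list of str

# The same interface works for a local corpus (path is hypothetical).
my_corpus = PlaintextCorpusReader("/tmp/my_corpus", r".*\.txt")
print(my_corpus.fileids())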
from nltk.corpus.reader.plaintext import * from nltk.corpus.reader.util import * from nltk.corpus.reader.api import * from nltk.corpus.reader.tagged import * from nltk.corpus.reader.cmudict import * from nltk.corpus.reader.conll import * from nltk.corpus.reader.chunked import * from nltk.corpus.reader.wordlist import * from nltk.corpus.reader.xmldocs import * from nltk.corpus.reader.ppattach import * from nltk.corpus.reader.senseval import * from nltk.corpus.reader.ieer import * from nltk.corpus.reader.sinica_treebank import * from nltk.corpus.reader.bracket_parse import * from nltk.corpus.reader.indian import * from nltk.corpus.reader.toolbox import * from nltk.corpus.reader.timit import * from nltk.corpus.reader.ycoe import * from nltk.corpus.reader.rte import * from nltk.corpus.reader.string_category import * from nltk.corpus.reader.propbank import * from nltk.corpus.reader.verbnet import * from nltk.corpus.reader.bnc import * from nltk.corpus.reader.nps_chat import * from nltk.corpus.reader.wordnet import * from nltk.corpus.reader.switchboard import * from nltk.corpus.reader.dependency import * from nltk.corpus.reader.nombank import * from nltk.corpus.reader.ipipan import * from nltk.corpus.reader.pl196x import * from nltk.corpus.reader.knbc import * from nltk.corpus.reader.chasen import * from nltk.corpus.reader.childes import * from nltk.corpus.reader.aligned import * from nltk.corpus.reader.lin import * from nltk.corpus.reader.semcor import * from nltk.corpus.reader.framenet import * from nltk.corpus.reader.udhr import * from nltk.corpus.reader.bnc import * from nltk.corpus.reader.sentiwordnet import * from nltk.corpus.reader.twitter import * from nltk.corpus.reader.nkjp import * from nltk.corpus.reader.crubadan import * from nltk.corpus.reader.mte import * from nltk.corpus.reader.reviews import * from nltk.corpus.reader.opinion_lexicon import * from nltk.corpus.reader.pros_cons import * from nltk.corpus.reader.categorized_sents import * from nltk.corpus.reader.comparative_sents import * from nltk.corpus.reader.panlex_lite import * from nltk.corpus.reader.panlex_swadesh import * from nltk.corpus.reader.bcp47 import * from nltk.corpus.reader import bracket_parse __all__ = [ "CorpusReader", "CategorizedCorpusReader", "PlaintextCorpusReader", "find_corpus_fileids", "TaggedCorpusReader", "CMUDictCorpusReader", "ConllChunkCorpusReader", "WordListCorpusReader", "PPAttachmentCorpusReader", "SensevalCorpusReader", "IEERCorpusReader", "ChunkedCorpusReader", "SinicaTreebankCorpusReader", "BracketParseCorpusReader", "IndianCorpusReader", "ToolboxCorpusReader", "TimitCorpusReader", "YCOECorpusReader", "MacMorphoCorpusReader", "SyntaxCorpusReader", "AlpinoCorpusReader", "RTECorpusReader", "StringCategoryCorpusReader", "EuroparlCorpusReader", "CategorizedBracketParseCorpusReader", "CategorizedTaggedCorpusReader", "CategorizedPlaintextCorpusReader", "PortugueseCategorizedPlaintextCorpusReader", "tagged_treebank_para_block_reader", "PropbankCorpusReader", "VerbnetCorpusReader", "BNCCorpusReader", "ConllCorpusReader", "XMLCorpusReader", "NPSChatCorpusReader", "SwadeshCorpusReader", "WordNetCorpusReader", "WordNetICCorpusReader", "SwitchboardCorpusReader", "DependencyCorpusReader", "NombankCorpusReader", "IPIPANCorpusReader", "Pl196xCorpusReader", "TEICorpusView", "KNBCorpusReader", "ChasenCorpusReader", "CHILDESCorpusReader", "AlignedCorpusReader", "TimitTaggedCorpusReader", "LinThesaurusCorpusReader", "SemcorCorpusReader", "FramenetCorpusReader", "UdhrCorpusReader", "BNCCorpusReader", 
"SentiWordNetCorpusReader", "SentiSynset", "TwitterCorpusReader", "NKJPCorpusReader", "CrubadanCorpusReader", "MTECorpusReader", "ReviewsCorpusReader", "OpinionLexiconCorpusReader", "ProsConsCorpusReader", "CategorizedSentencesCorpusReader", "ComparativeSentencesCorpusReader", "PanLexLiteCorpusReader", "NonbreakingPrefixesCorpusReader", "UnicharsCorpusReader", "MWAPPDBCorpusReader", "PanlexSwadeshCorpusReader", "BCP47CorpusReader", ]
Natural Language Toolkit: Aligned Corpus Reader
(C) 2001-2023 NLTK Project
URL: <https://www.nltk.org/>
Author: Steven Bird <stevenbird1@gmail.com>
For license information, see LICENSE.TXT

Reader for corpora of word-aligned sentences. Tokens are assumed to be separated by whitespace. Sentences begin on separate lines.

AlignedCorpusReader: construct a new Aligned Corpus reader for a set of documents located at the given root directory. Example usage:

    >>> root = '/path/to/corpus'
    >>> reader = AlignedCorpusReader(root, r'.*\.txt') # doctest: +SKIP

:param root: The root directory for this corpus.
:param fileids: A list or regexp specifying the fileids in this corpus.

words() returns the given file(s) as a list of words and punctuation symbols (:rtype: list(str)). sents() returns the given file(s) as a list of sentences or utterances, each encoded as a list of word strings (:rtype: list(list(str))). aligned_sents() returns the given file(s) as a list of AlignedSent objects (:rtype: list(AlignedSent)).

AlignedSentCorpusView: a specialized corpus view for aligned sentences. AlignedSentCorpusView objects are typically created by AlignedCorpusReader, not directly by NLTK users. (Kludge: we shouldn't have tokenized the alignment string.)
from nltk.corpus.reader.api import CorpusReader from nltk.corpus.reader.util import ( StreamBackedCorpusView, concat, read_alignedsent_block, ) from nltk.tokenize import RegexpTokenizer, WhitespaceTokenizer from nltk.translate import AlignedSent, Alignment class AlignedCorpusReader(CorpusReader): def __init__( self, root, fileids, sep="/", word_tokenizer=WhitespaceTokenizer(), sent_tokenizer=RegexpTokenizer("\n", gaps=True), alignedsent_block_reader=read_alignedsent_block, encoding="latin1", ): CorpusReader.__init__(self, root, fileids, encoding) self._sep = sep self._word_tokenizer = word_tokenizer self._sent_tokenizer = sent_tokenizer self._alignedsent_block_reader = alignedsent_block_reader def words(self, fileids=None): return concat( [ AlignedSentCorpusView( fileid, enc, False, False, self._word_tokenizer, self._sent_tokenizer, self._alignedsent_block_reader, ) for (fileid, enc) in self.abspaths(fileids, True) ] ) def sents(self, fileids=None): return concat( [ AlignedSentCorpusView( fileid, enc, False, True, self._word_tokenizer, self._sent_tokenizer, self._alignedsent_block_reader, ) for (fileid, enc) in self.abspaths(fileids, True) ] ) def aligned_sents(self, fileids=None): return concat( [ AlignedSentCorpusView( fileid, enc, True, True, self._word_tokenizer, self._sent_tokenizer, self._alignedsent_block_reader, ) for (fileid, enc) in self.abspaths(fileids, True) ] ) class AlignedSentCorpusView(StreamBackedCorpusView): def __init__( self, corpus_file, encoding, aligned, group_by_sent, word_tokenizer, sent_tokenizer, alignedsent_block_reader, ): self._aligned = aligned self._group_by_sent = group_by_sent self._word_tokenizer = word_tokenizer self._sent_tokenizer = sent_tokenizer self._alignedsent_block_reader = alignedsent_block_reader StreamBackedCorpusView.__init__(self, corpus_file, encoding=encoding) def read_block(self, stream): block = [ self._word_tokenizer.tokenize(sent_str) for alignedsent_str in self._alignedsent_block_reader(stream) for sent_str in self._sent_tokenizer.tokenize(alignedsent_str) ] if self._aligned: block[2] = Alignment.fromstring( " ".join(block[2]) ) block = [AlignedSent(*block)] elif self._group_by_sent: block = [block[0]] else: block = block[0] return block
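A hedged sketch of the reader defined above; the root directory and file pattern are placeholders for a locally stored word-aligned corpus:

from nltk.corpus.reader import AlignedCorpusReader

reader = AlignedCorpusReader("/path/to/aligned_corpus", r".*\.txt")
for asent in reader.aligned_sents()[:1]:
    # AlignedSent exposes the source words, target words and the alignment.
    print(asent.words, asent.mots, asent.alignment)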
Natural Language Toolkit: BCP-47 language tags
(C) 2022-2023 NLTK Project
Author: Eric Kafe <kafe.eric@gmail.com>
URL: <https://www.nltk.org/>
For license information, see LICENSE.TXT

Parse BCP-47 composite language tags. Supports all the main subtags, and the u-sd extension:

    >>> from nltk.corpus import bcp47
    >>> bcp47.name('oc-gascon-u-sd-fr64')
    'Occitan (post 1500): Gascon: Pyrénées-Atlantiques'

Can load a conversion table to Wikidata Q-codes:

    >>> bcp47.load_wiki_q()
    >>> bcp47.wiki_q['en-GI-spanglis']
    'Q79388'

BCP47CorpusReader reads the BCP-47 database. load_wiki_q() loads the conversion table to Wikidata Q-codes (only if needed); wiki_dict() converts the Wikidata list of Q-codes to a BCP-47 dictionary; subdiv_dict() converts the CLDR subdivisions list to a dictionary; data_dict() converts the BCP-47 language subtag registry to a dictionary (handling multiple values, multiline fields, and single values; for a single value, return only the first value). val2str() concatenates all values; lang2str() concatenates subtag values. parse_tag() converts a BCP-47 tag to a dictionary of labelled subtags (CLDR regional subdivisions are handled; other extension subtags are not supported yet). name() converts a BCP-47 tag to a colon-separated string of subtag names:

    >>> bcp47.name('ca-Latn-ES-valencia')
    'Catalan: Latin: Spain: Valencian'
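Beyond the doctests above, a small usage sketch (requires the 'bcp47' data package):

from nltk.corpus import bcp47

print(bcp47.name("ca-Latn-ES-valencia"))
# Catalan: Latin: Spain: Valencian
print(bcp47.parse_tag("ca-Latn-ES-valencia"))
# e.g. {'language': 'Catalan', 'script': 'Latin', ...}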
import re from warnings import warn from xml.etree import ElementTree as et from nltk.corpus.reader import CorpusReader class BCP47CorpusReader(CorpusReader): def __init__(self, root, fileids): super().__init__(root, fileids) self.langcode = {} with self.open("iana/language-subtag-registry.txt") as fp: self.db = self.data_dict(fp.read().split("%%\n")) with self.open("cldr/common-subdivisions-en.xml") as fp: self.subdiv = self.subdiv_dict( et.parse(fp).iterfind("localeDisplayNames/subdivisions/subdivision") ) self.morphology() def load_wiki_q(self): with self.open("cldr/tools-cldr-rdf-external-entityToCode.tsv") as fp: self.wiki_q = self.wiki_dict(fp.read().strip().split("\n")[1:]) def wiki_dict(self, lines): return { pair[1]: pair[0].split("/")[-1] for pair in [line.strip().split("\t") for line in lines] } def subdiv_dict(self, subdivs): return {sub.attrib["type"]: sub.text for sub in subdivs} def morphology(self): self.casing = { "language": str.lower, "extlang": str.lower, "script": str.title, "region": str.upper, "variant": str.lower, } dig = "[0-9]" low = "[a-z]" up = "[A-Z]" alnum = "[a-zA-Z0-9]" self.format = { "language": re.compile(f"{low*3}?"), "extlang": re.compile(f"{low*3}"), "script": re.compile(f"{up}{low*3}"), "region": re.compile(f"({up*2})|({dig*3})"), "variant": re.compile(f"{alnum*4}{(alnum+'?')*4}"), "singleton": re.compile(f"{low}"), } def data_dict(self, records): self.version = records[0].replace("File-Date:", "").strip() dic = {} dic["deprecated"] = {} for label in [ "language", "extlang", "script", "region", "variant", "redundant", "grandfathered", ]: dic["deprecated"][label] = {} for record in records[1:]: fields = [field.split(": ") for field in record.strip().split("\n")] typ = fields[0][1] tag = fields[1][1] if typ not in dic: dic[typ] = {} subfields = {} for field in fields[2:]: if len(field) == 2: [key, val] = field if key not in subfields: subfields[key] = [val] else: subfields[key].append(val) else: subfields[key][-1] += " " + field[0].strip() if ( "Deprecated" not in record and typ == "language" and key == "Description" ): self.langcode[subfields[key][-1]] = tag for key in subfields: if len(subfields[key]) == 1: subfields[key] = subfields[key][0] if "Deprecated" in record: dic["deprecated"][typ][tag] = subfields else: dic[typ][tag] = subfields return dic def val2str(self, val): if type(val) == list: val = val[0] return val def lang2str(self, lg_record): name = f"{lg_record['language']}" for label in ["extlang", "script", "region", "variant", "extension"]: if label in lg_record: name += f": {lg_record[label]}" return name def parse_tag(self, tag): subtags = tag.split("-") lang = {} labels = ["language", "extlang", "script", "region", "variant", "variant"] while subtags and labels: subtag = subtags.pop(0) found = False while labels: label = labels.pop(0) subtag = self.casing[label](subtag) if self.format[label].fullmatch(subtag): if subtag in self.db[label]: found = True valstr = self.val2str(self.db[label][subtag]["Description"]) if label == "variant" and label in lang: lang[label] += ": " + valstr else: lang[label] = valstr break elif subtag in self.db["deprecated"][label]: found = True note = f"The {subtag!r} {label} code is deprecated" if "Preferred-Value" in self.db["deprecated"][label][subtag]: prefer = self.db["deprecated"][label][subtag][ "Preferred-Value" ] note += f"', prefer '{self.val2str(prefer)}'" lang[label] = self.val2str( self.db["deprecated"][label][subtag]["Description"] ) warn(note) break if not found: if subtag == "u" and subtags[0] == 
"sd": sd = subtags[1] if sd in self.subdiv: ext = self.subdiv[sd] else: ext = f"<Unknown subdivision: {ext}>" else: ext = f"{subtag}{''.join(['-'+ext for ext in subtags])}".lower() if not self.format["singleton"].fullmatch(subtag): ext = f"<Invalid extension: {ext}>" warn(ext) lang["extension"] = ext subtags = [] return lang def name(self, tag): for label in ["redundant", "grandfathered"]: val = None if tag in self.db[label]: val = f"{self.db[label][tag]['Description']}" note = f"The {tag!r} code is {label}" elif tag in self.db["deprecated"][label]: val = f"{self.db['deprecated'][label][tag]['Description']}" note = f"The {tag!r} code is {label} and deprecated" if "Preferred-Value" in self.db["deprecated"][label][tag]: prefer = self.db["deprecated"][label][tag]["Preferred-Value"] note += f", prefer {self.val2str(prefer)!r}" if val: warn(note) return val try: return self.lang2str(self.parse_tag(tag)) except: warn(f"Tag {tag!r} was not recognized") return None
Natural Language Toolkit: Plaintext Corpus Reader
(C) 2001-2023 NLTK Project
Author: Edward Loper <edloper@gmail.com>
URL: <https://www.nltk.org/>
For license information, see LICENSE.TXT

Corpus reader for the XML version of the British National Corpus.

BNCCorpusReader: corpus reader for the XML version of the British National Corpus. For access to the complete XML data structure, use the xml() method. For access to simple word lists and tagged word lists, use words(), sents(), tagged_words() and tagged_sents(). You can obtain the full version of the BNC corpus at https://www.ota.ox.ac.uk/desc/2554. If you extracted the archive to a directory called "BNC", then you can instantiate the reader as:

    BNCCorpusReader(root='BNC/Texts/', fileids=r'[A-K]/\w*/\w*\.xml')

words() returns the given file(s) as a list of words and punctuation symbols (:rtype: list(str)).
:param strip_space: If true, then strip trailing spaces from word tokens. Otherwise, leave the spaces on the tokens.
:param stem: If true, then use word stems instead of word strings.

tagged_words() returns the given file(s) as a list of tagged words and punctuation symbols, encoded as tuples (word, tag) (:rtype: list(tuple(str, str))).
:param c5: If true, then the tags used will be the more detailed c5 tags. Otherwise, the simplified tags will be used.
:param strip_space: If true, then strip trailing spaces from word tokens. Otherwise, leave the spaces on the tokens.
:param stem: If true, then use word stems instead of word strings.

sents() returns the given file(s) as a list of sentences or utterances, each encoded as a list of word strings (:rtype: list(list(str))). tagged_sents() returns the given file(s) as a list of sentences, each encoded as a list of (word, tag) tuples (:rtype: list(list(tuple(str, str)))), with the same c5, strip_space and stem parameters.

_views() is a helper function that instantiates BNCWordViews or the list of words/sentences. _words() is a helper used to implement the view methods; it returns a list of words or a list of sentences, optionally tagged.
:param fileid: The name of the underlying file.
:param bracket_sent: If true, include sentence bracketing.
:param tag: The name of the tagset to use, or None for no tags.
:param strip_space: If true, strip spaces from word tokens.
:param stem: If true, then substitute stems for words.
(Substituting an empty string for a missing word fixes issue #337.)

BNCSentence: a list of words, augmented by an attribute num used to record the sentence identifier (the n attribute from the XML).

BNCWordView: a stream backed corpus view specialized for use with the British National Corpus. tags_to_ignore lists the tags that are ignored; for their description refer to the technical documentation, for example http://www.natcorp.ox.ac.uk/docs/URG/ref-vocal.html.
:param fileid: The name of the underlying file.
:param sent: If true, include sentence bracketing.
:param tag: The name of the tagset to use, or None for no tags.
:param strip_space: If true, strip spaces from word tokens.
:param stem: If true, then substitute stems for words.
Attributes: title (title of the document), author (author of the document), editor (editor), resps (statement of responsibility). The constructor reads in a tasty header, then resets the tag context. handle_header() sets up some metadata from the header; handle_elt() dispatches to handle_sent() or handle_word(); handle_word() extracts the token (optionally stripped, stemmed, and/or tagged with c5 or pos tags); handle_sent() collects words from w and c elements (also inside mw, hi, corr and trunc elements), raises a ValueError for unexpected elements, and returns a BNCSentence.
from nltk.corpus.reader.util import concat from nltk.corpus.reader.xmldocs import ElementTree, XMLCorpusReader, XMLCorpusView class BNCCorpusReader(XMLCorpusReader): r def __init__(self, root, fileids, lazy=True): XMLCorpusReader.__init__(self, root, fileids) self._lazy = lazy def words(self, fileids=None, strip_space=True, stem=False): return self._views(fileids, False, None, strip_space, stem) def tagged_words(self, fileids=None, c5=False, strip_space=True, stem=False): tag = "c5" if c5 else "pos" return self._views(fileids, False, tag, strip_space, stem) def sents(self, fileids=None, strip_space=True, stem=False): return self._views(fileids, True, None, strip_space, stem) def tagged_sents(self, fileids=None, c5=False, strip_space=True, stem=False): tag = "c5" if c5 else "pos" return self._views( fileids, sent=True, tag=tag, strip_space=strip_space, stem=stem ) def _views(self, fileids=None, sent=False, tag=False, strip_space=True, stem=False): f = BNCWordView if self._lazy else self._words return concat( [ f(fileid, sent, tag, strip_space, stem) for fileid in self.abspaths(fileids) ] ) def _words(self, fileid, bracket_sent, tag, strip_space, stem): result = [] xmldoc = ElementTree.parse(fileid).getroot() for xmlsent in xmldoc.findall(".//s"): sent = [] for xmlword in _all_xmlwords_in(xmlsent): word = xmlword.text if not word: word = "" if strip_space or stem: word = word.strip() if stem: word = xmlword.get("hw", word) if tag == "c5": word = (word, xmlword.get("c5")) elif tag == "pos": word = (word, xmlword.get("pos", xmlword.get("c5"))) sent.append(word) if bracket_sent: result.append(BNCSentence(xmlsent.attrib["n"], sent)) else: result.extend(sent) assert None not in result return result def _all_xmlwords_in(elt, result=None): if result is None: result = [] for child in elt: if child.tag in ("c", "w"): result.append(child) else: _all_xmlwords_in(child, result) return result class BNCSentence(list): def __init__(self, num, items): self.num = num list.__init__(self, items) class BNCWordView(XMLCorpusView): tags_to_ignore = { "pb", "gap", "vocal", "event", "unclear", "shift", "pause", "align", } def __init__(self, fileid, sent, tag, strip_space, stem): if sent: tagspec = ".*/s" else: tagspec = ".*/s/(.*/)?(c|w)" self._sent = sent self._tag = tag self._strip_space = strip_space self._stem = stem self.title = None self.author = None self.editor = None self.resps = None XMLCorpusView.__init__(self, fileid, tagspec) self._open() self.read_block(self._stream, ".*/teiHeader$", self.handle_header) self.close() self._tag_context = {0: ()} def handle_header(self, elt, context): titles = elt.findall("titleStmt/title") if titles: self.title = "\n".join(title.text.strip() for title in titles) authors = elt.findall("titleStmt/author") if authors: self.author = "\n".join(author.text.strip() for author in authors) editors = elt.findall("titleStmt/editor") if editors: self.editor = "\n".join(editor.text.strip() for editor in editors) resps = elt.findall("titleStmt/respStmt") if resps: self.resps = "\n\n".join( "\n".join(resp_elt.text.strip() for resp_elt in resp) for resp in resps ) def handle_elt(self, elt, context): if self._sent: return self.handle_sent(elt) else: return self.handle_word(elt) def handle_word(self, elt): word = elt.text if not word: word = "" if self._strip_space or self._stem: word = word.strip() if self._stem: word = elt.get("hw", word) if self._tag == "c5": word = (word, elt.get("c5")) elif self._tag == "pos": word = (word, elt.get("pos", elt.get("c5"))) return word def 
handle_sent(self, elt): sent = [] for child in elt: if child.tag in ("mw", "hi", "corr", "trunc"): sent += [self.handle_word(w) for w in child] elif child.tag in ("w", "c"): sent.append(self.handle_word(child)) elif child.tag not in self.tags_to_ignore: raise ValueError("Unexpected element %s" % child.tag) return BNCSentence(elt.attrib["n"], sent)
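As a quick orientation to the reader above, here is a minimal usage sketch. It assumes the BNC XML distribution has already been downloaded and unpacked into a local directory called BNC (neither the data nor the path ships with NLTK); the root and fileids values below are the ones suggested in the class description, and the slices are arbitrary.

# Minimal BNCCorpusReader usage sketch (assumes a local BNC/Texts/ tree).
from nltk.corpus.reader.bnc import BNCCorpusReader

bnc = BNCCorpusReader(root="BNC/Texts/", fileids=r"[A-K]/\w*/\w*\.xml")

print(bnc.words()[:10])                  # plain word list
print(bnc.tagged_words(c5=True)[:10])    # (word, C5 tag) tuples
print(bnc.sents(stem=True)[:2])          # sentences of word stems
print(bnc.tagged_sents()[:1])            # sentences of (word, simplified tag)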
Natural Language Toolkit: Penn Treebank Reader
(C) 2001-2023 NLTK Project. Authors: Steven Bird <stevenbird1@gmail.com>, Edward Loper <edloper@gmail.com>. URL: <https://www.nltk.org/>. For license information, see LICENSE.TXT.

Corpus reader for corpora that consist of parenthesis-delineated parse trees. The word-matching regular expressions use [^\s()]+ rather than \S+ so that parentheses are never matched as part of a token.

BracketParseCorpusReader: reader for corpora that consist of parenthesis-delineated parse trees, like those found in the "combined" section of the Penn Treebank, e.g. "(S (NP (DT the) (JJ little) (NN dog)) (VP (VBD barked)))". Parameters: root (the root directory for this corpus); fileids (a list or regexp specifying the fileids in this corpus); comment_char (the character which can appear at the start of a line to indicate that the rest of the line is a comment); detect_blocks (the method used to find blocks in the corpus: 'unindented_paren' means every unindented parenthesis starts a new parse, 'sexpr' means brackets are matched); tagset (the name of the tagset used by this corpus, used for normalizing or converting the POS tags returned by the tagged_...() methods). In _read_block, tokens start with unindented left parens, and any comments are stripped out of the tokens. _normalize replaces leaves of the form "(x)" with "(x x)" and leaves of the form "(tag word root)" with "(tag word)". _parse strips off an empty node at the top if there is one; if parsing fails it tries to recover, first by adding close parens, then by falling back to a flat parse, writing progress notes to sys.stderr.

CategorizedBracketParseCorpusReader: a reader for parsed corpora whose documents are divided into categories based on their file identifiers (author: Nathan Schneider <nschneid@cs.cmu.edu>). The categorization arguments (cat_pattern, cat_map and cat_file) are passed to the CategorizedCorpusReader constructor; the remaining arguments are passed to the BracketParseCorpusReader constructor.

AlpinoCorpusReader: reader for the Alpino Dutch Treebank. This corpus has a lexical breakdown structure embedded, as read by _parse. Unfortunately this puts punctuation and some other words out of sentence order in the XML element tree, which is no good for _tag() and _word(); both are therefore overridden to use a non-default, new parameter "ordered" of the overridden _normalize(), while _parse() can remain untouched. _normalize(t, ordered=False) normalizes the XML sentence element in t. The sentence elements <alpino_ds>, although embedded in a few overall XML elements, are separated by blank lines; that is how the reader can deliver them one at a time. Each sentence has a few category subnodes that are of no use to us, and the remaining word nodes may or may not appear in the proper order. Each word node has attributes, among which: begin (the position of the word in the sentence), pos (part of speech, the tag) and word (the actual word). The return value is a string with all XML elements replaced by bracketed clauses: either a cat clause with nested clauses, or a word clause. The order of the bracket clauses closely follows the XML. If ordered is true, the word clauses include an order sequence number; if false, they only have pos and word parts. _tag() converts the XML to s-expression notation and returns a correctly ordered tagged sentence; _word() returns the corresponding list of words.
import sys from nltk.corpus.reader.api import * from nltk.corpus.reader.util import * from nltk.tag import map_tag from nltk.tree import Tree SORTTAGWRD = re.compile(r"\((\d+) ([^\s()]+) ([^\s()]+)\)") TAGWORD = re.compile(r"\(([^\s()]+) ([^\s()]+)\)") WORD = re.compile(r"\([^\s()]+ ([^\s()]+)\)") EMPTY_BRACKETS = re.compile(r"\s*\(\s*\(") class BracketParseCorpusReader(SyntaxCorpusReader): def __init__( self, root, fileids, comment_char=None, detect_blocks="unindented_paren", encoding="utf8", tagset=None, ): SyntaxCorpusReader.__init__(self, root, fileids, encoding) self._comment_char = comment_char self._detect_blocks = detect_blocks self._tagset = tagset def _read_block(self, stream): if self._detect_blocks == "sexpr": return read_sexpr_block(stream, comment_char=self._comment_char) elif self._detect_blocks == "blankline": return read_blankline_block(stream) elif self._detect_blocks == "unindented_paren": toks = read_regexp_block(stream, start_re=r"^\(") if self._comment_char: toks = [ re.sub("(?m)^%s.*" % re.escape(self._comment_char), "", tok) for tok in toks ] return toks else: assert 0, "bad block type" def _normalize(self, t): t = re.sub(r"\((.)\)", r"(\1 \1)", t) t = re.sub(r"\(([^\s()]+) ([^\s()]+) [^\s()]+\)", r"(\1 \2)", t) return t def _parse(self, t): try: tree = Tree.fromstring(self._normalize(t)) if tree.label() == "" and len(tree) == 1: return tree[0] else: return tree except ValueError as e: sys.stderr.write("Bad tree detected; trying to recover...\n") if e.args == ("mismatched parens",): for n in range(1, 5): try: v = Tree(self._normalize(t + ")" * n)) sys.stderr.write( " Recovered by adding %d close " "paren(s)\n" % n ) return v except ValueError: pass sys.stderr.write(" Recovered by returning a flat parse.\n") return Tree("S", self._tag(t)) def _tag(self, t, tagset=None): tagged_sent = [(w, p) for (p, w) in TAGWORD.findall(self._normalize(t))] if tagset and tagset != self._tagset: tagged_sent = [ (w, map_tag(self._tagset, tagset, p)) for (w, p) in tagged_sent ] return tagged_sent def _word(self, t): return WORD.findall(self._normalize(t)) class CategorizedBracketParseCorpusReader( CategorizedCorpusReader, BracketParseCorpusReader ): def __init__(self, *args, **kwargs): CategorizedCorpusReader.__init__(self, kwargs) BracketParseCorpusReader.__init__(self, *args, **kwargs) def tagged_words(self, fileids=None, categories=None, tagset=None): return super().tagged_words(self._resolve(fileids, categories), tagset) def tagged_sents(self, fileids=None, categories=None, tagset=None): return super().tagged_sents(self._resolve(fileids, categories), tagset) def tagged_paras(self, fileids=None, categories=None, tagset=None): return super().tagged_paras(self._resolve(fileids, categories), tagset) def parsed_words(self, fileids=None, categories=None): return super().parsed_words(self._resolve(fileids, categories)) def parsed_sents(self, fileids=None, categories=None): return super().parsed_sents(self._resolve(fileids, categories)) def parsed_paras(self, fileids=None, categories=None): return super().parsed_paras(self._resolve(fileids, categories)) class AlpinoCorpusReader(BracketParseCorpusReader): def __init__(self, root, encoding="ISO-8859-1", tagset=None): BracketParseCorpusReader.__init__( self, root, r"alpino\.xml", detect_blocks="blankline", encoding=encoding, tagset=tagset, ) def _normalize(self, t, ordered=False): if t[:10] != "<alpino_ds": return "" t = re.sub(r' <node .*? cat="(\w+)".*>', r"(\1", t) if ordered: t = re.sub( r' <node. *?begin="(\d+)".*? pos="(\w+)".*? 
word="([^"]+)".*?/>', r"(\1 \2 \3)", t, ) else: t = re.sub(r' <node .*?pos="(\w+)".*? word="([^"]+)".*?/>', r"(\1 \2)", t) t = re.sub(r" </node>", r")", t) t = re.sub(r"<sentence>.*</sentence>", r"", t) t = re.sub(r"</?alpino_ds.*>", r"", t) return t def _tag(self, t, tagset=None): tagged_sent = [ (int(o), w, p) for (o, p, w) in SORTTAGWRD.findall(self._normalize(t, ordered=True)) ] tagged_sent.sort() if tagset and tagset != self._tagset: tagged_sent = [ (w, map_tag(self._tagset, tagset, p)) for (o, w, p) in tagged_sent ] else: tagged_sent = [(w, p) for (o, w, p) in tagged_sent] return tagged_sent def _word(self, t): tagged_sent = self._tag(t) return [w for (w, p) in tagged_sent]
Natural Language Toolkit: Categorized Sentences Corpus Reader
(C) 2001-2023 NLTK Project. Author: Pierpaolo Pantone <24alsecondo@gmail.com>. URL: <https://www.nltk.org/>. For license information, see LICENSE.TXT.

CorpusReader structured for corpora that contain one instance on each row. This CorpusReader is specifically used for the Subjectivity Dataset and the Sentence Polarity Dataset.

Subjectivity Dataset information: authors Bo Pang and Lillian Lee; URL: https://www.cs.cornell.edu/people/pabo/movie-review-data ; distributed with permission. Related paper: Bo Pang and Lillian Lee, "A Sentimental Education: Sentiment Analysis Using Subjectivity Summarization Based on Minimum Cuts", Proceedings of the ACL, 2004.

Sentence Polarity Dataset information: authors Bo Pang and Lillian Lee; URL: https://www.cs.cornell.edu/people/pabo/movie-review-data . Related paper: Bo Pang and Lillian Lee, "Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales", Proceedings of the ACL, 2005.

CategorizedSentencesCorpusReader: a reader for corpora in which each row represents a single instance, mainly a sentence. Instances are divided into categories based on their file identifiers (see CategorizedCorpusReader). Since many corpora allow rows that contain more than one sentence, it is possible to specify a sentence tokenizer to retrieve all sentences rather than all rows.

Example usage with the Subjectivity Dataset: subjectivity.sents()[23] returns the tokenized sentence "television made him famous, but his biggest hits happened off screen"; subjectivity.categories() returns ['obj', 'subj']; subjectivity.words(categories='subj') begins ['smart', 'and', 'alert', ..., 'thirteen', ...]. Example usage with the Sentence Polarity Dataset: sentence_polarity.sents() begins with the tokenized rows "simplistic, silly and tedious" and "it's so laddish and juvenile, only teenage boys could possibly find it funny"; sentence_polarity.categories() returns ['neg', 'pos'].

Constructor parameters: root (the root directory for the corpus); fileids (a list or regexp specifying the fileids in the corpus); word_tokenizer (a tokenizer for breaking sentences or paragraphs into words; default: WhitespaceTokenizer); sent_tokenizer (a tokenizer for breaking paragraphs into sentences); encoding (the encoding that should be used to read the corpus); kwargs (additional parameters passed to CategorizedCorpusReader).

sents(fileids, categories) returns all sentences in the corpus or in the specified file(s), each tokenized with the specified word_tokenizer (list(list(str))). words(fileids, categories) returns all words and punctuation symbols in the corpus or in the specified file(s) (list(str)). In both methods, fileids is a list or regexp specifying the ids of the files to read, and categories is a list specifying the categories to read. The sentence block reader processes 20 lines at a time.
from nltk.corpus.reader.api import * from nltk.tokenize import * class CategorizedSentencesCorpusReader(CategorizedCorpusReader, CorpusReader): CorpusView = StreamBackedCorpusView def __init__( self, root, fileids, word_tokenizer=WhitespaceTokenizer(), sent_tokenizer=None, encoding="utf8", **kwargs ): CorpusReader.__init__(self, root, fileids, encoding) CategorizedCorpusReader.__init__(self, kwargs) self._word_tokenizer = word_tokenizer self._sent_tokenizer = sent_tokenizer def sents(self, fileids=None, categories=None): fileids = self._resolve(fileids, categories) if fileids is None: fileids = self._fileids elif isinstance(fileids, str): fileids = [fileids] return concat( [ self.CorpusView(path, self._read_sent_block, encoding=enc) for (path, enc, fileid) in self.abspaths(fileids, True, True) ] ) def words(self, fileids=None, categories=None): fileids = self._resolve(fileids, categories) if fileids is None: fileids = self._fileids elif isinstance(fileids, str): fileids = [fileids] return concat( [ self.CorpusView(path, self._read_word_block, encoding=enc) for (path, enc, fileid) in self.abspaths(fileids, True, True) ] ) def _read_sent_block(self, stream): sents = [] for i in range(20): line = stream.readline() if not line: continue if self._sent_tokenizer: sents.extend( [ self._word_tokenizer.tokenize(sent) for sent in self._sent_tokenizer.tokenize(line) ] ) else: sents.append(self._word_tokenizer.tokenize(line)) return sents def _read_word_block(self, stream): words = [] for sent in self._read_sent_block(stream): words.extend(sent) return words
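The bundled subjectivity and sentence_polarity loaders are built on this reader. The sketch below shows that bundled access (assuming nltk.download("subjectivity") has been run) and a hand-built reader over a hypothetical custom corpus; the directory, file names and category pattern in the second part are illustrative assumptions, not anything shipped with NLTK.

# Bundled usage: one-sentence-per-row corpora distributed for NLTK.
from nltk.corpus import subjectivity

print(subjectivity.categories())                 # ['obj', 'subj']
print(subjectivity.sents()[23])
print(subjectivity.words(categories="subj")[:5])

# Hand-built reader over a hypothetical corpus: one review per row,
# one file per category, category name taken from the file name.
from nltk.corpus.reader.categorized_sents import CategorizedSentencesCorpusReader
from nltk.tokenize import RegexpTokenizer

reader = CategorizedSentencesCorpusReader(
    "my_reviews/",                               # assumed local directory
    r"\w+\.txt",
    word_tokenizer=RegexpTokenizer(r"\w+|[^\w\s]+"),
    cat_pattern=r"(\w+)\.txt",                   # category from the file name
)
print(reader.categories())
print(reader.sents(categories=reader.categories()[:1])[:2])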
Natural Language Toolkit: ChaSen Corpus Reader
(C) 2001-2023 NLTK Project. Author: Masato Hagiwara <hagisan@gmail.com>. URL: <https://www.nltk.org/>. For license information, see LICENSE.TXT.

ChasenCorpusView is a specialized corpus view for ChasenCorpusReader. It is similar to TaggedCorpusView, but uses a fixed set of word and sentence tokenizers: read_block() reads one EOS-terminated paragraph at a time, splitting each analysis line into a (surface, tab-joined analysis) pair.
import sys from nltk.corpus.reader import util from nltk.corpus.reader.api import * from nltk.corpus.reader.util import * class ChasenCorpusReader(CorpusReader): def __init__(self, root, fileids, encoding="utf8", sent_splitter=None): self._sent_splitter = sent_splitter CorpusReader.__init__(self, root, fileids, encoding) def words(self, fileids=None): return concat( [ ChasenCorpusView(fileid, enc, False, False, False, self._sent_splitter) for (fileid, enc) in self.abspaths(fileids, True) ] ) def tagged_words(self, fileids=None): return concat( [ ChasenCorpusView(fileid, enc, True, False, False, self._sent_splitter) for (fileid, enc) in self.abspaths(fileids, True) ] ) def sents(self, fileids=None): return concat( [ ChasenCorpusView(fileid, enc, False, True, False, self._sent_splitter) for (fileid, enc) in self.abspaths(fileids, True) ] ) def tagged_sents(self, fileids=None): return concat( [ ChasenCorpusView(fileid, enc, True, True, False, self._sent_splitter) for (fileid, enc) in self.abspaths(fileids, True) ] ) def paras(self, fileids=None): return concat( [ ChasenCorpusView(fileid, enc, False, True, True, self._sent_splitter) for (fileid, enc) in self.abspaths(fileids, True) ] ) def tagged_paras(self, fileids=None): return concat( [ ChasenCorpusView(fileid, enc, True, True, True, self._sent_splitter) for (fileid, enc) in self.abspaths(fileids, True) ] ) class ChasenCorpusView(StreamBackedCorpusView): def __init__( self, corpus_file, encoding, tagged, group_by_sent, group_by_para, sent_splitter=None, ): self._tagged = tagged self._group_by_sent = group_by_sent self._group_by_para = group_by_para self._sent_splitter = sent_splitter StreamBackedCorpusView.__init__(self, corpus_file, encoding=encoding) def read_block(self, stream): block = [] for para_str in read_regexp_block(stream, r".", r"^EOS\n"): para = [] sent = [] for line in para_str.splitlines(): _eos = line.strip() == "EOS" _cells = line.split("\t") w = (_cells[0], "\t".join(_cells[1:])) if not _eos: sent.append(w) if _eos or (self._sent_splitter and self._sent_splitter(w)): if not self._tagged: sent = [w for (w, t) in sent] if self._group_by_sent: para.append(sent) else: para.extend(sent) sent = [] if len(sent) > 0: if not self._tagged: sent = [w for (w, t) in sent] if self._group_by_sent: para.append(sent) else: para.extend(sent) if self._group_by_para: block.append(para) else: block.extend(para) return block def demo(): import nltk from nltk.corpus.util import LazyCorpusLoader jeita = LazyCorpusLoader("jeita", ChasenCorpusReader, r".*chasen", encoding="utf-8") print("/".join(jeita.words()[22100:22140])) print( "\nEOS\n".join( "\n".join("{}/{}".format(w[0], w[1].split("\t")[2]) for w in sent) for sent in jeita.tagged_sents()[2170:2173] ) ) def test(): from nltk.corpus.util import LazyCorpusLoader jeita = LazyCorpusLoader("jeita", ChasenCorpusReader, r".*chasen", encoding="utf-8") assert isinstance(jeita.tagged_words()[0][1], str) if __name__ == "__main__": demo() test()
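The demo() above reads the JEITA corpus through a LazyCorpusLoader; as a complement, here is a sketch of pointing ChasenCorpusReader at your own ChaSen output and supplying a sent_splitter. The directory and file pattern are assumptions; any UTF-8 ChaSen dump (tab-separated analysis lines, paragraphs terminated by EOS) laid out that way should behave the same.

# Sketch: a hand-rolled ChasenCorpusReader with a sentence splitter that
# breaks sentences at the Japanese full stop token.
from nltk.corpus.reader.chasen import ChasenCorpusReader

split_on_kuten = lambda token: token[0] == "。"   # token is (surface, analysis)
reader = ChasenCorpusReader("my_chasen_data/", r".*\.chasen",
                            encoding="utf-8", sent_splitter=split_on_kuten)

print(reader.words()[:10])          # surface forms only
print(reader.tagged_words()[:3])    # (surface, tab-joined analysis) pairs
print(len(reader.sents()))          # sentences split at "。" or at EOS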
Natural Language Toolkit: CHILDES XML Corpus Reader
(C) 2001-2023 NLTK Project. Authors: Tomonori Nagano <tnagano@gc.cuny.edu>, Alexis Dimitriadis <A.Dimitriadis@uu.nl>. URL: <https://www.nltk.org/>. For license information, see LICENSE.TXT.

Corpus reader for the XML version of the CHILDES corpus. The NS constant holds the TalkBank namespace, used to resolve the namespace issue when querying the XML.

CHILDESCorpusReader: corpus reader for the XML version of the CHILDES corpus. The CHILDES corpus is available at https://childes.talkbank.org/; the XML version is located at https://childes.talkbank.org/data-xml/. Copy the needed parts of the CHILDES XML corpus into the NLTK data directory (nltk_data/corpora/CHILDES/). For access to the file text use the usual NLTK functions: words(), sents(), tagged_words() and tagged_sents().

words() and tagged_words() return the given file(s) as a list of words (list(str)) or of (word, tag) tuples (list(tuple(str, str))); sents() and tagged_sents() return the corresponding lists of sentences or utterances. All four accept: speaker (if specified, select specific speaker(s) defined in the corpus; default is 'ALL' for all participants; common choices are 'CHI' (the child), 'MOT' (mother), or ['CHI', 'MOT'] to exclude researchers); stem (if true, use word stems instead of word strings); relation (for words/tagged_words: if true, return tuples of (stem, index, dependent_index); for sents/tagged_sents: if true, return tuples of (str, pos, relation_list), or, where manually annotated relation info exists, (str, pos, test_relation_list, str, pos, gold_relation_list)); strip_space (if true, strip trailing spaces from word tokens; otherwise leave the spaces on the tokens); replace (if true, use the replaced, i.e. intended, word instead of the original word, e.g. 'wat' is replaced with 'watch').

corpus() returns the given file(s) as a dict of (corpus_property_key, value); participants() returns a dict of (participant_property_key, value), built from the participants' data with multidimensional dicts. age() returns the given file(s) as a string or int; with month=True it returns months instead of the year-month-day form (some files and some corpora have no age information). MLU() returns the mean length of utterance for the given file(s) as a float: sentences are skipped if any part of them is unintelligible (tagged 'unk'), if the sentence is null, or if it repeats the previous sentence; fillers are counted and discounted; morphemes are counted per word (e.g. 'read' is 1 morpheme but 'read-PAST' is 2 morphemes); the result is numWords / numSents.

_get_words() does the real work: it ensures speaker is a list, processes each XML document, selects the requested speakers, substitutes replaced words where requested, extracts the text, strips trailing space, applies stemming (including inflections and suffixes), attaches POS tags, and attaches relational (dependency) information; the gold-standard relations are stored under <mor>/<mor type="trn">/<gra type="grt">.

webview_file(fileid, urlbase=None) is a ready-to-use browser opener: it maps a corpus file to its web version on the CHILDES website and opens it in a web browser. childes_url_base is the base URL for viewing files on the CHILDES website; it shouldn't need to be changed unless CHILDES changes the configuration of their server, or unless the user sets up their own corpus webserver. The complete URL used is childes_url_base + urlbase + fileid.replace('.xml', '.cha'). If no urlbase is passed, the function tries to calculate it; this requires that the CHILDES corpus was set up to mirror the folder hierarchy under childes.psy.cmu.edu/data-xml/, e.g. nltk_data/corpora/childes/Eng-USA/Cornell/ or nltk_data/corpora/childes/Romance/Spanish/Aguirre/. The function first checks, as a special case, whether "Eng-USA" is on the path consisting of corpus root plus fileid, then whether "childes" (possibly followed by "data-xml") appears; if neither is found it uses the unmodified fileid and hopes for the best. If that is not right, specify urlbase explicitly, e.g. if the corpus root points to the Cornell folder, urlbase='Eng-USA/Cornell'. Internally the function discards "data-xml" if present, strips ".xml" and adds ".cha" as necessary.

demo() describes every corpus file it finds. The CHILDES corpus, or the parts you need, should be manually downloaded from https://childes.talkbank.org/data-xml/ and saved at nltk_data/corpora/childes/; alternatively, call the demo with the path to a portion of the CHILDES corpus, e.g. demo('/path/to/childes/data-xml/Eng-USA/'). Pausing between files (e.g. with raw_input("Hit Return to continue")) would be a good idea, but that is left up to the user. A commented-out variant that reads the data over HTTP as a zipfile currently fails.
__docformat__ = "epytext en" import re from collections import defaultdict from nltk.corpus.reader.util import concat from nltk.corpus.reader.xmldocs import ElementTree, XMLCorpusReader from nltk.util import LazyConcatenation, LazyMap, flatten NS = "http://www.talkbank.org/ns/talkbank" class CHILDESCorpusReader(XMLCorpusReader): def __init__(self, root, fileids, lazy=True): XMLCorpusReader.__init__(self, root, fileids) self._lazy = lazy def words( self, fileids=None, speaker="ALL", stem=False, relation=False, strip_space=True, replace=False, ): sent = None pos = False if not self._lazy: return [ self._get_words( fileid, speaker, sent, stem, relation, pos, strip_space, replace ) for fileid in self.abspaths(fileids) ] get_words = lambda fileid: self._get_words( fileid, speaker, sent, stem, relation, pos, strip_space, replace ) return LazyConcatenation(LazyMap(get_words, self.abspaths(fileids))) def tagged_words( self, fileids=None, speaker="ALL", stem=False, relation=False, strip_space=True, replace=False, ): sent = None pos = True if not self._lazy: return [ self._get_words( fileid, speaker, sent, stem, relation, pos, strip_space, replace ) for fileid in self.abspaths(fileids) ] get_words = lambda fileid: self._get_words( fileid, speaker, sent, stem, relation, pos, strip_space, replace ) return LazyConcatenation(LazyMap(get_words, self.abspaths(fileids))) def sents( self, fileids=None, speaker="ALL", stem=False, relation=None, strip_space=True, replace=False, ): sent = True pos = False if not self._lazy: return [ self._get_words( fileid, speaker, sent, stem, relation, pos, strip_space, replace ) for fileid in self.abspaths(fileids) ] get_words = lambda fileid: self._get_words( fileid, speaker, sent, stem, relation, pos, strip_space, replace ) return LazyConcatenation(LazyMap(get_words, self.abspaths(fileids))) def tagged_sents( self, fileids=None, speaker="ALL", stem=False, relation=None, strip_space=True, replace=False, ): sent = True pos = True if not self._lazy: return [ self._get_words( fileid, speaker, sent, stem, relation, pos, strip_space, replace ) for fileid in self.abspaths(fileids) ] get_words = lambda fileid: self._get_words( fileid, speaker, sent, stem, relation, pos, strip_space, replace ) return LazyConcatenation(LazyMap(get_words, self.abspaths(fileids))) def corpus(self, fileids=None): if not self._lazy: return [self._get_corpus(fileid) for fileid in self.abspaths(fileids)] return LazyMap(self._get_corpus, self.abspaths(fileids)) def _get_corpus(self, fileid): results = dict() xmldoc = ElementTree.parse(fileid).getroot() for key, value in xmldoc.items(): results[key] = value return results def participants(self, fileids=None): if not self._lazy: return [self._get_participants(fileid) for fileid in self.abspaths(fileids)] return LazyMap(self._get_participants, self.abspaths(fileids)) def _get_participants(self, fileid): def dictOfDicts(): return defaultdict(dictOfDicts) xmldoc = ElementTree.parse(fileid).getroot() pat = dictOfDicts() for participant in xmldoc.findall( f".//{{{NS}}}Participants/{{{NS}}}participant" ): for (key, value) in participant.items(): pat[participant.get("id")][key] = value return pat def age(self, fileids=None, speaker="CHI", month=False): if not self._lazy: return [ self._get_age(fileid, speaker, month) for fileid in self.abspaths(fileids) ] get_age = lambda fileid: self._get_age(fileid, speaker, month) return LazyMap(get_age, self.abspaths(fileids)) def _get_age(self, fileid, speaker, month): xmldoc = ElementTree.parse(fileid).getroot() for pat in 
xmldoc.findall(f".//{{{NS}}}Participants/{{{NS}}}participant"): try: if pat.get("id") == speaker: age = pat.get("age") if month: age = self.convert_age(age) return age except (TypeError, AttributeError) as e: return None def convert_age(self, age_year): "Caclculate age in months from a string in CHILDES format" m = re.match(r"P(\d+)Y(\d+)M?(\d?\d?)D?", age_year) age_month = int(m.group(1)) * 12 + int(m.group(2)) try: if int(m.group(3)) > 15: age_month += 1 except ValueError as e: pass return age_month def MLU(self, fileids=None, speaker="CHI"): if not self._lazy: return [ self._getMLU(fileid, speaker=speaker) for fileid in self.abspaths(fileids) ] get_MLU = lambda fileid: self._getMLU(fileid, speaker=speaker) return LazyMap(get_MLU, self.abspaths(fileids)) def _getMLU(self, fileid, speaker): sents = self._get_words( fileid, speaker=speaker, sent=True, stem=True, relation=False, pos=True, strip_space=True, replace=True, ) results = [] lastSent = [] numFillers = 0 sentDiscount = 0 for sent in sents: posList = [pos for (word, pos) in sent] if any(pos == "unk" for pos in posList): continue elif sent == []: continue elif sent == lastSent: continue else: results.append([word for (word, pos) in sent]) if len({"co", None}.intersection(posList)) > 0: numFillers += posList.count("co") numFillers += posList.count(None) sentDiscount += 1 lastSent = sent try: thisWordList = flatten(results) numWords = ( len(flatten([word.split("-") for word in thisWordList])) - numFillers ) numSents = len(results) - sentDiscount mlu = numWords / numSents except ZeroDivisionError: mlu = 0 return mlu def _get_words( self, fileid, speaker, sent, stem, relation, pos, strip_space, replace ): if ( isinstance(speaker, str) and speaker != "ALL" ): speaker = [speaker] xmldoc = ElementTree.parse(fileid).getroot() results = [] for xmlsent in xmldoc.findall(".//{%s}u" % NS): sents = [] if speaker == "ALL" or xmlsent.get("who") in speaker: for xmlword in xmlsent.findall(".//{%s}w" % NS): infl = None suffixStem = None suffixTag = None if replace and xmlsent.find(f".//{{{NS}}}w/{{{NS}}}replacement"): xmlword = xmlsent.find( f".//{{{NS}}}w/{{{NS}}}replacement/{{{NS}}}w" ) elif replace and xmlsent.find(f".//{{{NS}}}w/{{{NS}}}wk"): xmlword = xmlsent.find(f".//{{{NS}}}w/{{{NS}}}wk") if xmlword.text: word = xmlword.text else: word = "" if strip_space: word = word.strip() if relation or stem: try: xmlstem = xmlword.find(".//{%s}stem" % NS) word = xmlstem.text except AttributeError as e: pass try: xmlinfl = xmlword.find( f".//{{{NS}}}mor/{{{NS}}}mw/{{{NS}}}mk" ) word += "-" + xmlinfl.text except: pass try: xmlsuffix = xmlword.find( ".//{%s}mor/{%s}mor-post/{%s}mw/{%s}stem" % (NS, NS, NS, NS) ) suffixStem = xmlsuffix.text except AttributeError: suffixStem = "" if suffixStem: word += "~" + suffixStem if relation or pos: try: xmlpos = xmlword.findall(".//{%s}c" % NS) xmlpos2 = xmlword.findall(".//{%s}s" % NS) if xmlpos2 != []: tag = xmlpos[0].text + ":" + xmlpos2[0].text else: tag = xmlpos[0].text except (AttributeError, IndexError) as e: tag = "" try: xmlsuffixpos = xmlword.findall( ".//{%s}mor/{%s}mor-post/{%s}mw/{%s}pos/{%s}c" % (NS, NS, NS, NS, NS) ) xmlsuffixpos2 = xmlword.findall( ".//{%s}mor/{%s}mor-post/{%s}mw/{%s}pos/{%s}s" % (NS, NS, NS, NS, NS) ) if xmlsuffixpos2: suffixTag = ( xmlsuffixpos[0].text + ":" + xmlsuffixpos2[0].text ) else: suffixTag = xmlsuffixpos[0].text except: pass if suffixTag: tag += "~" + suffixTag word = (word, tag) if relation == True: for xmlstem_rel in xmlword.findall( f".//{{{NS}}}mor/{{{NS}}}gra" ): if not 
xmlstem_rel.get("type") == "grt": word = ( word[0], word[1], xmlstem_rel.get("index") + "|" + xmlstem_rel.get("head") + "|" + xmlstem_rel.get("relation"), ) else: word = ( word[0], word[1], word[2], word[0], word[1], xmlstem_rel.get("index") + "|" + xmlstem_rel.get("head") + "|" + xmlstem_rel.get("relation"), ) try: for xmlpost_rel in xmlword.findall( f".//{{{NS}}}mor/{{{NS}}}mor-post/{{{NS}}}gra" ): if not xmlpost_rel.get("type") == "grt": suffixStem = ( suffixStem[0], suffixStem[1], xmlpost_rel.get("index") + "|" + xmlpost_rel.get("head") + "|" + xmlpost_rel.get("relation"), ) else: suffixStem = ( suffixStem[0], suffixStem[1], suffixStem[2], suffixStem[0], suffixStem[1], xmlpost_rel.get("index") + "|" + xmlpost_rel.get("head") + "|" + xmlpost_rel.get("relation"), ) except: pass sents.append(word) if sent or relation: results.append(sents) else: results.extend(sents) return LazyMap(lambda x: x, results) childes_url_base = r"https://childes.talkbank.org/browser/index.php?url=" def webview_file(self, fileid, urlbase=None): import webbrowser if urlbase: path = urlbase + "/" + fileid else: full = self.root + "/" + fileid full = re.sub(r"\\", "/", full) if "/childes/" in full.lower(): path = re.findall(r"(?i)/childes(?:/data-xml)?/(.*)\.xml", full)[0] elif "eng-usa" in full.lower(): path = "Eng-USA/" + re.findall(r"/(?i)Eng-USA/(.*)\.xml", full)[0] else: path = fileid if path.endswith(".xml"): path = path[:-4] if not path.endswith(".cha"): path = path + ".cha" url = self.childes_url_base + path webbrowser.open_new_tab(url) print("Opening in browser:", url) def demo(corpus_root=None): if not corpus_root: from nltk.data import find corpus_root = find("corpora/childes/data-xml/Eng-USA/") try: childes = CHILDESCorpusReader(corpus_root, ".*.xml") for file in childes.fileids()[:5]: corpus = "" corpus_id = "" for (key, value) in childes.corpus(file)[0].items(): if key == "Corpus": corpus = value if key == "Id": corpus_id = value print("Reading", corpus, corpus_id, " .....") print("words:", childes.words(file)[:7], "...") print( "words with replaced words:", childes.words(file, replace=True)[:7], " ...", ) print("words with pos tags:", childes.tagged_words(file)[:7], " ...") print("words (only MOT):", childes.words(file, speaker="MOT")[:7], "...") print("words (only CHI):", childes.words(file, speaker="CHI")[:7], "...") print("stemmed words:", childes.words(file, stem=True)[:7], " ...") print( "words with relations and pos-tag:", childes.words(file, relation=True)[:5], " ...", ) print("sentence:", childes.sents(file)[:2], " ...") for (participant, values) in childes.participants(file)[0].items(): for (key, value) in values.items(): print("\tparticipant", participant, key, ":", value) print("num of sent:", len(childes.sents(file))) print("num of morphemes:", len(childes.words(file, stem=True))) print("age:", childes.age(file)) print("age in month:", childes.age(file, month=True)) print("MLU:", childes.MLU(file)) print() except LookupError as e: print( ) if __name__ == "__main__": demo()
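Beyond the built-in demo(), a compact sketch of typical CHILDES analysis follows. It assumes a portion of the CHILDES XML data (here, hypothetically, the Eng-USA Valian set) has been copied under nltk_data/corpora/childes/data-xml/; adjust the paths to your own layout.

# Sketch: speaker filtering and MLU over a locally installed CHILDES portion.
from nltk.corpus.reader import CHILDESCorpusReader
from nltk.data import find

corpus_root = find("corpora/childes/data-xml/Eng-USA/")
valian = CHILDESCorpusReader(corpus_root, r"Valian/.*\.xml")   # assumed sub-corpus

print(valian.words(speaker="CHI", stem=True)[:10])     # child's stems only
print(valian.tagged_sents(speaker=["CHI", "MOT"])[0])  # exclude researchers
for fileid, age, mlu in zip(valian.fileids(), valian.age(month=True), valian.MLU()):
    print(fileid, age, round(mlu, 2))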
Natural Language Toolkit: Chunked Corpus Reader
(C) 2001-2023 NLTK Project. Authors: Steven Bird <stevenbird1@gmail.com>, Edward Loper <edloper@gmail.com>. URL: <https://www.nltk.org/>. For license information, see LICENSE.TXT.

A reader for corpora that contain chunked (and optionally tagged) documents.

ChunkedCorpusReader: reader for chunked (and optionally tagged) corpora. Paragraphs are split using a block reader; they are then tokenized into sentences using a sentence tokenizer; finally, these sentences are parsed into chunk trees using a string-to-chunktree conversion function. Each of these steps can be performed using a default function or a custom function. By default, paragraphs are split on blank lines, sentences are listed one per line, and sentences are parsed into chunk trees using nltk.chunk.tagstr2tree. Constructor parameters: root (the root directory for this corpus) and fileids (a list or regexp specifying the fileids in this corpus); the remaining arguments are stored as the arguments for the corpus views generated by this corpus, a tuple (str2chunktree, sent_tokenizer, para_block_reader, tagset).

words() returns the given file(s) as a list of words and punctuation symbols (list(str)); sents() as a list of sentences or utterances, each a list of word strings (list(list(str))); paras() as a list of paragraphs, each a list of sentences, which are in turn lists of word strings (list(list(list(str)))). tagged_words(), tagged_sents() and tagged_paras() are the corresponding views over (word, tag) tuples. chunked_words() returns the given file(s) as a list of tagged words and chunks: words are encoded as (word, tag) tuples if the corpus has tags, or as word strings if it has no tags, and chunks are encoded as depth-one trees over those tokens. chunked_sents() returns a list of sentences, each encoded as a shallow Tree whose leaves are (word, tag) tuples or word strings; chunked_paras() returns a list of paragraphs, each a list of such shallow trees.

ChunkedCorpusView.read_block() parses each sentence with str2chunktree, throws away the tags and/or the chunks if they were not requested, adds the sentence to the paragraph (or directly to the block), adds the paragraph to the block when grouping by paragraph, and returns the block.
import codecs import os.path import nltk from nltk.chunk import tagstr2tree from nltk.corpus.reader.api import * from nltk.corpus.reader.bracket_parse import BracketParseCorpusReader from nltk.corpus.reader.util import * from nltk.tokenize import * from nltk.tree import Tree class ChunkedCorpusReader(CorpusReader): def __init__( self, root, fileids, extension="", str2chunktree=tagstr2tree, sent_tokenizer=RegexpTokenizer("\n", gaps=True), para_block_reader=read_blankline_block, encoding="utf8", tagset=None, ): CorpusReader.__init__(self, root, fileids, encoding) self._cv_args = (str2chunktree, sent_tokenizer, para_block_reader, tagset) def words(self, fileids=None): return concat( [ ChunkedCorpusView(f, enc, 0, 0, 0, 0, *self._cv_args) for (f, enc) in self.abspaths(fileids, True) ] ) def sents(self, fileids=None): return concat( [ ChunkedCorpusView(f, enc, 0, 1, 0, 0, *self._cv_args) for (f, enc) in self.abspaths(fileids, True) ] ) def paras(self, fileids=None): return concat( [ ChunkedCorpusView(f, enc, 0, 1, 1, 0, *self._cv_args) for (f, enc) in self.abspaths(fileids, True) ] ) def tagged_words(self, fileids=None, tagset=None): return concat( [ ChunkedCorpusView( f, enc, 1, 0, 0, 0, *self._cv_args, target_tagset=tagset ) for (f, enc) in self.abspaths(fileids, True) ] ) def tagged_sents(self, fileids=None, tagset=None): return concat( [ ChunkedCorpusView( f, enc, 1, 1, 0, 0, *self._cv_args, target_tagset=tagset ) for (f, enc) in self.abspaths(fileids, True) ] ) def tagged_paras(self, fileids=None, tagset=None): return concat( [ ChunkedCorpusView( f, enc, 1, 1, 1, 0, *self._cv_args, target_tagset=tagset ) for (f, enc) in self.abspaths(fileids, True) ] ) def chunked_words(self, fileids=None, tagset=None): return concat( [ ChunkedCorpusView( f, enc, 1, 0, 0, 1, *self._cv_args, target_tagset=tagset ) for (f, enc) in self.abspaths(fileids, True) ] ) def chunked_sents(self, fileids=None, tagset=None): return concat( [ ChunkedCorpusView( f, enc, 1, 1, 0, 1, *self._cv_args, target_tagset=tagset ) for (f, enc) in self.abspaths(fileids, True) ] ) def chunked_paras(self, fileids=None, tagset=None): return concat( [ ChunkedCorpusView( f, enc, 1, 1, 1, 1, *self._cv_args, target_tagset=tagset ) for (f, enc) in self.abspaths(fileids, True) ] ) def _read_block(self, stream): return [tagstr2tree(t) for t in read_blankline_block(stream)] class ChunkedCorpusView(StreamBackedCorpusView): def __init__( self, fileid, encoding, tagged, group_by_sent, group_by_para, chunked, str2chunktree, sent_tokenizer, para_block_reader, source_tagset=None, target_tagset=None, ): StreamBackedCorpusView.__init__(self, fileid, encoding=encoding) self._tagged = tagged self._group_by_sent = group_by_sent self._group_by_para = group_by_para self._chunked = chunked self._str2chunktree = str2chunktree self._sent_tokenizer = sent_tokenizer self._para_block_reader = para_block_reader self._source_tagset = source_tagset self._target_tagset = target_tagset def read_block(self, stream): block = [] for para_str in self._para_block_reader(stream): para = [] for sent_str in self._sent_tokenizer.tokenize(para_str): sent = self._str2chunktree( sent_str, source_tagset=self._source_tagset, target_tagset=self._target_tagset, ) if not self._tagged: sent = self._untag(sent) if not self._chunked: sent = sent.leaves() if self._group_by_sent: para.append(sent) else: para.extend(sent) if self._group_by_para: block.append(para) else: block.extend(para) return block def _untag(self, tree): for i, child in enumerate(tree): if isinstance(child, Tree): 
self._untag(child) elif isinstance(child, tuple): tree[i] = child[0] else: raise ValueError("expected child to be Tree or tuple") return tree
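A minimal usage sketch for the reader above. The directory "corpus_root" and the file pattern are placeholders, not part of NLTK's distributed data; the sketch assumes files in the default tagged-and-bracketed format that nltk.chunk.tagstr2tree expects (one sentence per line, blank lines between paragraphs), e.g. "[ the/DT dog/NN ] barked/VBD ./.".

from nltk.corpus.reader import ChunkedCorpusReader

# Point the reader at a local directory of *.chunk files (illustrative paths).
reader = ChunkedCorpusReader("corpus_root", r".*\.chunk")

print(reader.words()[:10])        # flat list of word strings
print(reader.tagged_sents()[0])   # first sentence as (word, tag) pairs
print(reader.chunked_sents()[0])  # first sentence as a shallow chunk Tree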
Natural Language Toolkit: Carnegie Mellon Pronouncing Dictionary Corpus Reader. (C) 2001-2023 NLTK Project. Author: Steven Bird <stevenbird1@gmail.com>. URL: https://www.nltk.org. For license information, see LICENSE.TXT.

The Carnegie Mellon Pronouncing Dictionary [cmudict.0.6], ftp://ftp.cs.cmu.edu/project/speech/dict/, Copyright 1998 Carnegie Mellon University.

File format: each line consists of an uppercased word, a counter (for alternative pronunciations), and a transcription. Vowels are marked for stress (1 = primary, 2 = secondary, 0 = no stress), e.g. NATURAL 1 N AE1 CH ER0 AH0 L

The dictionary contains 127,069 entries. Of these, 119,400 words are assigned a unique pronunciation, 6,830 words have two pronunciations, and 839 words have three or more pronunciations. Many of these are fast-speech variants.

Phonemes: there are 39 phonemes, as shown below:

Phoneme Example Translation    Phoneme Example Translation
------- ------- -----------    ------- ------- -----------
AA      odd     AA D           AE      at      AE T
AH      hut     HH AH T        AO      ought   AO T
AW      cow     K AW           AY      hide    HH AY D
B       be      B IY           CH      cheese  CH IY Z
D       dee     D IY           DH      thee    DH IY
EH      Ed      EH D           ER      hurt    HH ER T
EY      ate     EY T           F       fee     F IY
G       green   G R IY N       HH      he      HH IY
IH      it      IH T           IY      eat     IY T
JH      gee     JH IY          K       key     K IY
L       lee     L IY           M       me      M IY
N       knee    N IY           NG      ping    P IH NG
OW      oat     OW T           OY      toy     T OY
P       pee     P IY           R       read    R IY D
S       sea     S IY           SH      she     SH IY
T       tea     T IY           TH      theta   TH EY T AH
UH      hood    HH UH D        UW      two     T UW
V       vee     V IY           W       we      W IY
Y       yield   Y IY L D       Z       zee     Z IY
ZH      seizure S IY ZH ER

entries() returns the cmudict lexicon as a list of entries containing (word, transcriptions) tuples; words() returns a list of all words defined in the cmudict lexicon; dict() returns the cmudict lexicon as a dictionary whose keys are lowercase words and whose values are lists of pronunciations. The block reader reads 100 entries at a time and stops at end of file.
from nltk.corpus.reader.api import * from nltk.corpus.reader.util import * from nltk.util import Index class CMUDictCorpusReader(CorpusReader): def entries(self): return concat( [ StreamBackedCorpusView(fileid, read_cmudict_block, encoding=enc) for fileid, enc in self.abspaths(None, True) ] ) def words(self): return [word.lower() for (word, _) in self.entries()] def dict(self): return dict(Index(self.entries())) def read_cmudict_block(stream): entries = [] while len(entries) < 100: line = stream.readline() if line == "": return entries pieces = line.split() entries.append((pieces[0].lower(), pieces[2:])) return entries
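The reader above backs the bundled cmudict corpus object; a short sketch of typical access (assumes the data has been fetched with nltk.download("cmudict")):

from nltk.corpus import cmudict

entries = cmudict.entries()          # list of (word, [phoneme, ...]) pairs
print(entries[:3])
prondict = cmudict.dict()            # lowercase word -> list of pronunciations
print(prondict.get("natural"))       # e.g. [['N', 'AE1', 'CH', 'ER0', 'AH0', 'L']]
print(len(cmudict.words()))          # all words defined in the lexicon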
Natural Language Toolkit: Comparative Sentence Corpus Reader. (C) 2001-2023 NLTK Project. Author: Pierpaolo Pantone <24alsecondo@gmail.com>. URL: https://www.nltk.org. For license information, see LICENSE.TXT.

CorpusReader for the Comparative Sentence Dataset.

Comparative Sentence Dataset information: annotated by Nitin Jindal and Bing Liu, 2006, Department of Computer Science, University of Illinois at Chicago. Contact: Nitin Jindal <njindal@cs.uic.edu>, Bing Liu <liub@cs.uic.edu> (https://www.cs.uic.edu/~liub). Distributed with permission.

Related papers: Nitin Jindal and Bing Liu, "Identifying Comparative Sentences in Text Documents", Proceedings of the ACM SIGIR International Conference on Information Retrieval (SIGIR-06), 2006. Nitin Jindal and Bing Liu, "Mining Comparative Sentences and Relations", Proceedings of the Twenty-First National Conference on Artificial Intelligence (AAAI-2006), 2006. Murthy Ganapathibhotla and Bing Liu, "Mining Opinions in Comparative Sentences", Proceedings of the 22nd International Conference on Computational Linguistics (Coling-2008), Manchester, 18-22 August 2008.

The module defines regular expressions for the dataset components.

Comparison represents a comparative sentence and its constituents. :param text: a string (optionally tokenized) containing a comparison. :param comp_type: an integer defining the type of comparison expressed; values can be 1 (non-equal gradable), 2 (equative), 3 (superlative), 4 (non-gradable). :param entity_1: the first entity considered in the comparison relation. :param entity_2: the second entity considered in the comparison relation. :param feature: the feature considered in the comparison relation. :param keyword: the word or phrase which is used for that comparative relation.

ComparativeSentencesCorpusReader is the reader for the Comparative Sentence Dataset by Jindal and Liu (2006). For example (from the doctest, whitespace normalized): comparative_sentences.comparisons()[0].text is the tokenized sentence "its fast-forward and rewind work much more smoothly and consistently than those of other models i 've had ."; its entity_2 is "models", its feature is "rewind" and its keyword is "more"; len(comparative_sentences.comparisons()) is 853.

:param root: the root directory for this corpus. :param fileids: a list or regexp specifying the fileids in this corpus. :param word_tokenizer: tokenizer for breaking sentences or paragraphs into words (default: WhitespaceTokenizer). :param sent_tokenizer: tokenizer for breaking paragraphs into sentences. :param encoding: the encoding that should be used to read the corpus.

comparisons() returns all comparisons in the given file(s) as a list of Comparison objects. keywords() returns the set of keywords and comparative phrases used in the corpus. keywords_readme() returns the list of words and constituents considered as clues of a comparison, from listOfkeywords.txt. sents() returns all sentences of the corpus as lists of tokens (or as plain strings, if no word tokenizer is specified). words() returns the given file(s) as a list of words and punctuation symbols.

The comparison block reader stops at end of file; otherwise it advances to the next line (it contains the comparative sentence) and skips the following line (it contains the closing comparison tags). If gradable comparisons are found, it creates Comparison instances and populates their fields; each comparison tag has its own relations on a separate line. If non-gradable comparisons are found, it creates a simple Comparison instance for each one (comp_type in this case should always be 4). Finally, the list of comparisons is flattened before being returned - concat(comparison_bundle).
import re from nltk.corpus.reader.api import * from nltk.tokenize import * STARS = re.compile(r"^\*+$") COMPARISON = re.compile(r"<cs-[1234]>") CLOSE_COMPARISON = re.compile(r"</cs-[1234]>") GRAD_COMPARISON = re.compile(r"<cs-[123]>") NON_GRAD_COMPARISON = re.compile(r"<cs-4>") ENTITIES_FEATS = re.compile(r"(\d)_((?:[\.\w\s/-](?!\d_))+)") KEYWORD = re.compile(r"\(([^\(]*)\)$") class Comparison: def __init__( self, text=None, comp_type=None, entity_1=None, entity_2=None, feature=None, keyword=None, ): self.text = text self.comp_type = comp_type self.entity_1 = entity_1 self.entity_2 = entity_2 self.feature = feature self.keyword = keyword def __repr__(self): return ( 'Comparison(text="{}", comp_type={}, entity_1="{}", entity_2="{}", ' 'feature="{}", keyword="{}")' ).format( self.text, self.comp_type, self.entity_1, self.entity_2, self.feature, self.keyword, ) class ComparativeSentencesCorpusReader(CorpusReader): CorpusView = StreamBackedCorpusView def __init__( self, root, fileids, word_tokenizer=WhitespaceTokenizer(), sent_tokenizer=None, encoding="utf8", ): CorpusReader.__init__(self, root, fileids, encoding) self._word_tokenizer = word_tokenizer self._sent_tokenizer = sent_tokenizer self._readme = "README.txt" def comparisons(self, fileids=None): if fileids is None: fileids = self._fileids elif isinstance(fileids, str): fileids = [fileids] return concat( [ self.CorpusView(path, self._read_comparison_block, encoding=enc) for (path, enc, fileid) in self.abspaths(fileids, True, True) ] ) def keywords(self, fileids=None): all_keywords = concat( [ self.CorpusView(path, self._read_keyword_block, encoding=enc) for (path, enc, fileid) in self.abspaths(fileids, True, True) ] ) keywords_set = {keyword.lower() for keyword in all_keywords if keyword} return keywords_set def keywords_readme(self): keywords = [] with self.open("listOfkeywords.txt") as fp: raw_text = fp.read() for line in raw_text.split("\n"): if not line or line.startswith("//"): continue keywords.append(line.strip()) return keywords def sents(self, fileids=None): return concat( [ self.CorpusView(path, self._read_sent_block, encoding=enc) for (path, enc, fileid) in self.abspaths(fileids, True, True) ] ) def words(self, fileids=None): return concat( [ self.CorpusView(path, self._read_word_block, encoding=enc) for (path, enc, fileid) in self.abspaths(fileids, True, True) ] ) def _read_comparison_block(self, stream): while True: line = stream.readline() if not line: return [] comparison_tags = re.findall(COMPARISON, line) if comparison_tags: grad_comparisons = re.findall(GRAD_COMPARISON, line) non_grad_comparisons = re.findall(NON_GRAD_COMPARISON, line) comparison_text = stream.readline().strip() if self._word_tokenizer: comparison_text = self._word_tokenizer.tokenize(comparison_text) stream.readline() comparison_bundle = [] if grad_comparisons: for comp in grad_comparisons: comp_type = int(re.match(r"<cs-(\d)>", comp).group(1)) comparison = Comparison( text=comparison_text, comp_type=comp_type ) line = stream.readline() entities_feats = ENTITIES_FEATS.findall(line) if entities_feats: for (code, entity_feat) in entities_feats: if code == "1": comparison.entity_1 = entity_feat.strip() elif code == "2": comparison.entity_2 = entity_feat.strip() elif code == "3": comparison.feature = entity_feat.strip() keyword = KEYWORD.findall(line) if keyword: comparison.keyword = keyword[0] comparison_bundle.append(comparison) if non_grad_comparisons: for comp in non_grad_comparisons: comp_type = int(re.match(r"<cs-(\d)>", comp).group(1)) comparison = 
Comparison( text=comparison_text, comp_type=comp_type ) comparison_bundle.append(comparison) return comparison_bundle def _read_keyword_block(self, stream): keywords = [] for comparison in self._read_comparison_block(stream): keywords.append(comparison.keyword) return keywords def _read_sent_block(self, stream): while True: line = stream.readline() if re.match(STARS, line): while True: line = stream.readline() if re.match(STARS, line): break continue if ( not re.findall(COMPARISON, line) and not ENTITIES_FEATS.findall(line) and not re.findall(CLOSE_COMPARISON, line) ): if self._sent_tokenizer: return [ self._word_tokenizer.tokenize(sent) for sent in self._sent_tokenizer.tokenize(line) ] else: return [self._word_tokenizer.tokenize(line)] def _read_word_block(self, stream): words = [] for sent in self._read_sent_block(stream): words.extend(sent) return words
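A brief usage sketch of the reader via the bundled corpus object, echoing the doctest in the description above (assumes nltk.download("comparative_sentences") has been run):

from nltk.corpus import comparative_sentences

comps = comparative_sentences.comparisons()
print(len(comps))                          # 853 comparisons in the distributed data
first = comps[0]
print(first.text)                          # tokenized comparative sentence
print(first.entity_2, first.feature, first.keyword)
print(sorted(comparative_sentences.keywords())[:10])   # comparative cue words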
Natural Language Toolkit: CONLL Corpus Reader. (C) 2001-2023 NLTK Project. Authors: Steven Bird <stevenbird1@gmail.com>, Edward Loper <edloper@gmail.com>. URL: https://www.nltk.org. For license information, see LICENSE.TXT.

Read CoNLL-style chunk fileids.

ConllCorpusReader is a corpus reader for CoNLL-style files. These files consist of a series of sentences, separated by blank lines. Each sentence is encoded using a table (or "grid") of values, where each line corresponds to a single word and each column corresponds to an annotation type. The set of columns used by CoNLL-style files can vary from corpus to corpus; the ConllCorpusReader constructor therefore takes an argument, columntypes, which is used to specify the columns that are used by a given corpus. By default, columns are split by consecutive whitespace; with the separator argument you can set a string to split by instead (e.g. '\t').

todo: add support for reading from corpora where different parallel files contain different columns. todo: possibly add caching of the grid corpus view; this would allow the same grid view to be used by different data access methods (e.g. words() and parsed_sents() could both share the same grid corpus view object). todo: better support for -DOCSTART-; currently we just ignore it, but it could be used to define methods that retrieve a document at a time (e.g. parsed_documents()).

Column types: the class defines a column type for words, for part-of-speech tags, for parse trees, for chunk structures, for named entities, for semantic role labels, and for columns that should be ignored; COLUMN_TYPES is a list of all column types supported by the CoNLL corpus reader.

Data access methods: several of them capture chunk_types or pos_in_tree as a local variable before building a lazy map. iob_words() returns a list of (word, tag, IOB-tag) tuples; iob_sents() returns a list of lists of (word, tag, IOB-tag) tuples. :param fileids: the list of fileids that make up this corpus (None, str, or list).

Grid reading: N.B. we could cache the object returned here (keyed on fileids), which would let us reuse the same corpus view for different things (e.g. srl and parse trees). If there's a -DOCSTART- row, it is discarded (xx: eventually it would be good to actually use it). The reader checks that the grid is consistent (the same number of columns in every row).

Transforms: given a grid, transform it into some representation (e.g. a list of words or a parse tree). N.B. the chunk transform is very similar to conllstr2tree: if it's a chunk type we don't care about, treat it as O; treat a mismatching I like a B; for B or I, close any open chunks; for B, start a new chunk; then add the word token.

SRL spans are a list of lists of ((start, end), tag) tuples. The reader counts how many predicates there are; this tells us how many columns to expect for SRL data. When building instances it must decide which spanlist to use; don't assume that they're sorted in the same order as the predicates, even though they usually are.

Helper methods follow.

ConllSRLInstance: an SRL instance from a CoNLL corpus, which identifies and provides labels for the arguments of a single verb. (xx: add inst.core_arguments, inst.argm_arguments?) verb: a list of the word indices of the words that compose the verb whose arguments are identified by this instance; this will contain multiple word indices when multi-word verbs are used (e.g. "turn on"). verb_head: the word index of the head word of the verb; e.g., for a sentence that uses the verb "turn on", verb_head will be the word index of the word "turn". arguments: a list of (argspan, argid) tuples, specifying the location and type of each argument identified by this instance; argspan is a tuple (start, end), indicating that the argument consists of words[start:end]. tagged_spans: a list of (span, id) tuples, specifying the location and type of each argument, as well as the verb pieces, that make up this instance. tree: the parse tree for the sentence containing this instance. words: a list of the words in the sentence containing this instance. The constructor fills in the self.verb and self.arguments values; in __repr__, plural is "s" if len(self.arguments) != 1 else "".

ConllSRLInstanceList: a set of instances for a single sentence. pprint() performs a sanity check (trees should be the same), adds tree columns if desired (optional tree columns), then writes the verb-head column and the remaining argument columns.

ConllChunkCorpusReader: a ConllCorpusReader whose data file contains three columns: words, pos, and chunk.
import textwrap from nltk.corpus.reader.api import * from nltk.corpus.reader.util import * from nltk.tag import map_tag from nltk.tree import Tree from nltk.util import LazyConcatenation, LazyMap class ConllCorpusReader(CorpusReader): WORDS = "words" POS = "pos" TREE = "tree" CHUNK = "chunk" NE = "ne" SRL = "srl" IGNORE = "ignore" COLUMN_TYPES = (WORDS, POS, TREE, CHUNK, NE, SRL, IGNORE) def __init__( self, root, fileids, columntypes, chunk_types=None, root_label="S", pos_in_tree=False, srl_includes_roleset=True, encoding="utf8", tree_class=Tree, tagset=None, separator=None, ): for columntype in columntypes: if columntype not in self.COLUMN_TYPES: raise ValueError("Bad column type %r" % columntype) if isinstance(chunk_types, str): chunk_types = [chunk_types] self._chunk_types = chunk_types self._colmap = {c: i for (i, c) in enumerate(columntypes)} self._pos_in_tree = pos_in_tree self._root_label = root_label self._srl_includes_roleset = srl_includes_roleset self._tree_class = tree_class CorpusReader.__init__(self, root, fileids, encoding) self._tagset = tagset self.sep = separator def words(self, fileids=None): self._require(self.WORDS) return LazyConcatenation(LazyMap(self._get_words, self._grids(fileids))) def sents(self, fileids=None): self._require(self.WORDS) return LazyMap(self._get_words, self._grids(fileids)) def tagged_words(self, fileids=None, tagset=None): self._require(self.WORDS, self.POS) def get_tagged_words(grid): return self._get_tagged_words(grid, tagset) return LazyConcatenation(LazyMap(get_tagged_words, self._grids(fileids))) def tagged_sents(self, fileids=None, tagset=None): self._require(self.WORDS, self.POS) def get_tagged_words(grid): return self._get_tagged_words(grid, tagset) return LazyMap(get_tagged_words, self._grids(fileids)) def chunked_words(self, fileids=None, chunk_types=None, tagset=None): self._require(self.WORDS, self.POS, self.CHUNK) if chunk_types is None: chunk_types = self._chunk_types def get_chunked_words(grid): return self._get_chunked_words(grid, chunk_types, tagset) return LazyConcatenation(LazyMap(get_chunked_words, self._grids(fileids))) def chunked_sents(self, fileids=None, chunk_types=None, tagset=None): self._require(self.WORDS, self.POS, self.CHUNK) if chunk_types is None: chunk_types = self._chunk_types def get_chunked_words(grid): return self._get_chunked_words(grid, chunk_types, tagset) return LazyMap(get_chunked_words, self._grids(fileids)) def parsed_sents(self, fileids=None, pos_in_tree=None, tagset=None): self._require(self.WORDS, self.POS, self.TREE) if pos_in_tree is None: pos_in_tree = self._pos_in_tree def get_parsed_sent(grid): return self._get_parsed_sent(grid, pos_in_tree, tagset) return LazyMap(get_parsed_sent, self._grids(fileids)) def srl_spans(self, fileids=None): self._require(self.SRL) return LazyMap(self._get_srl_spans, self._grids(fileids)) def srl_instances(self, fileids=None, pos_in_tree=None, flatten=True): self._require(self.WORDS, self.POS, self.TREE, self.SRL) if pos_in_tree is None: pos_in_tree = self._pos_in_tree def get_srl_instances(grid): return self._get_srl_instances(grid, pos_in_tree) result = LazyMap(get_srl_instances, self._grids(fileids)) if flatten: result = LazyConcatenation(result) return result def iob_words(self, fileids=None, tagset=None): self._require(self.WORDS, self.POS, self.CHUNK) def get_iob_words(grid): return self._get_iob_words(grid, tagset) return LazyConcatenation(LazyMap(get_iob_words, self._grids(fileids))) def iob_sents(self, fileids=None, tagset=None): self._require(self.WORDS, 
self.POS, self.CHUNK) def get_iob_words(grid): return self._get_iob_words(grid, tagset) return LazyMap(get_iob_words, self._grids(fileids)) def _grids(self, fileids=None): return concat( [ StreamBackedCorpusView(fileid, self._read_grid_block, encoding=enc) for (fileid, enc) in self.abspaths(fileids, True) ] ) def _read_grid_block(self, stream): grids = [] for block in read_blankline_block(stream): block = block.strip() if not block: continue grid = [line.split(self.sep) for line in block.split("\n")] if grid[0][self._colmap.get("words", 0)] == "-DOCSTART-": del grid[0] for row in grid: if len(row) != len(grid[0]): raise ValueError("Inconsistent number of columns:\n%s" % block) grids.append(grid) return grids def _get_words(self, grid): return self._get_column(grid, self._colmap["words"]) def _get_tagged_words(self, grid, tagset=None): pos_tags = self._get_column(grid, self._colmap["pos"]) if tagset and tagset != self._tagset: pos_tags = [map_tag(self._tagset, tagset, t) for t in pos_tags] return list(zip(self._get_column(grid, self._colmap["words"]), pos_tags)) def _get_iob_words(self, grid, tagset=None): pos_tags = self._get_column(grid, self._colmap["pos"]) if tagset and tagset != self._tagset: pos_tags = [map_tag(self._tagset, tagset, t) for t in pos_tags] return list( zip( self._get_column(grid, self._colmap["words"]), pos_tags, self._get_column(grid, self._colmap["chunk"]), ) ) def _get_chunked_words(self, grid, chunk_types, tagset=None): words = self._get_column(grid, self._colmap["words"]) pos_tags = self._get_column(grid, self._colmap["pos"]) if tagset and tagset != self._tagset: pos_tags = [map_tag(self._tagset, tagset, t) for t in pos_tags] chunk_tags = self._get_column(grid, self._colmap["chunk"]) stack = [Tree(self._root_label, [])] for (word, pos_tag, chunk_tag) in zip(words, pos_tags, chunk_tags): if chunk_tag == "O": state, chunk_type = "O", "" else: (state, chunk_type) = chunk_tag.split("-") if chunk_types is not None and chunk_type not in chunk_types: state = "O" if state == "I" and chunk_type != stack[-1].label(): state = "B" if state in "BO" and len(stack) == 2: stack.pop() if state == "B": new_chunk = Tree(chunk_type, []) stack[-1].append(new_chunk) stack.append(new_chunk) stack[-1].append((word, pos_tag)) return stack[0] def _get_parsed_sent(self, grid, pos_in_tree, tagset=None): words = self._get_column(grid, self._colmap["words"]) pos_tags = self._get_column(grid, self._colmap["pos"]) if tagset and tagset != self._tagset: pos_tags = [map_tag(self._tagset, tagset, t) for t in pos_tags] parse_tags = self._get_column(grid, self._colmap["tree"]) treestr = "" for (word, pos_tag, parse_tag) in zip(words, pos_tags, parse_tags): if word == "(": word = "-LRB-" if word == ")": word = "-RRB-" if pos_tag == "(": pos_tag = "-LRB-" if pos_tag == ")": pos_tag = "-RRB-" (left, right) = parse_tag.split("*") right = right.count(")") * ")" treestr += f"{left} ({pos_tag} {word}) {right}" try: tree = self._tree_class.fromstring(treestr) except (ValueError, IndexError): tree = self._tree_class.fromstring(f"({self._root_label} {treestr})") if not pos_in_tree: for subtree in tree.subtrees(): for i, child in enumerate(subtree): if ( isinstance(child, Tree) and len(child) == 1 and isinstance(child[0], str) ): subtree[i] = (child[0], child.label()) return tree def _get_srl_spans(self, grid): if self._srl_includes_roleset: predicates = self._get_column(grid, self._colmap["srl"] + 1) start_col = self._colmap["srl"] + 2 else: predicates = self._get_column(grid, self._colmap["srl"]) start_col = 
self._colmap["srl"] + 1 num_preds = len([p for p in predicates if p != "-"]) spanlists = [] for i in range(num_preds): col = self._get_column(grid, start_col + i) spanlist = [] stack = [] for wordnum, srl_tag in enumerate(col): (left, right) = srl_tag.split("*") for tag in left.split("("): if tag: stack.append((tag, wordnum)) for i in range(right.count(")")): (tag, start) = stack.pop() spanlist.append(((start, wordnum + 1), tag)) spanlists.append(spanlist) return spanlists def _get_srl_instances(self, grid, pos_in_tree): tree = self._get_parsed_sent(grid, pos_in_tree) spanlists = self._get_srl_spans(grid) if self._srl_includes_roleset: predicates = self._get_column(grid, self._colmap["srl"] + 1) rolesets = self._get_column(grid, self._colmap["srl"]) else: predicates = self._get_column(grid, self._colmap["srl"]) rolesets = [None] * len(predicates) instances = ConllSRLInstanceList(tree) for wordnum, predicate in enumerate(predicates): if predicate == "-": continue for spanlist in spanlists: for (start, end), tag in spanlist: if wordnum in range(start, end) and tag in ("V", "C-V"): break else: continue break else: raise ValueError("No srl column found for %r" % predicate) instances.append( ConllSRLInstance(tree, wordnum, predicate, rolesets[wordnum], spanlist) ) return instances def _require(self, *columntypes): for columntype in columntypes: if columntype not in self._colmap: raise ValueError( "This corpus does not contain a %s " "column." % columntype ) @staticmethod def _get_column(grid, column_index): return [grid[i][column_index] for i in range(len(grid))] class ConllSRLInstance: def __init__(self, tree, verb_head, verb_stem, roleset, tagged_spans): self.verb = [] self.verb_head = verb_head self.verb_stem = verb_stem self.roleset = roleset self.arguments = [] self.tagged_spans = tagged_spans self.tree = tree self.words = tree.leaves() for (start, end), tag in tagged_spans: if tag in ("V", "C-V"): self.verb += list(range(start, end)) else: self.arguments.append(((start, end), tag)) def __repr__(self): plural = "s" if len(self.arguments) != 1 else "" return "<ConllSRLInstance for %r with %d argument%s>" % ( (self.verb_stem, len(self.arguments), plural) ) def pprint(self): verbstr = " ".join(self.words[i][0] for i in self.verb) hdr = f"SRL for {verbstr!r} (stem={self.verb_stem!r}):\n" s = "" for i, word in enumerate(self.words): if isinstance(word, tuple): word = word[0] for (start, end), argid in self.arguments: if i == start: s += "[%s " % argid if i == end: s += "] " if i in self.verb: word = "<<%s>>" % word s += word + " " return hdr + textwrap.fill( s.replace(" ]", "]"), initial_indent=" ", subsequent_indent=" " ) class ConllSRLInstanceList(list): def __init__(self, tree, instances=()): self.tree = tree list.__init__(self, instances) def __str__(self): return self.pprint() def pprint(self, include_tree=False): for inst in self: if inst.tree != self.tree: raise ValueError("Tree mismatch!") if include_tree: words = self.tree.leaves() pos = [None] * len(words) synt = ["*"] * len(words) self._tree2conll(self.tree, 0, words, pos, synt) s = "" for i in range(len(words)): if include_tree: s += "%-20s " % words[i] s += "%-8s " % pos[i] s += "%15s*%-8s " % tuple(synt[i].split("*")) for inst in self: if i == inst.verb_head: s += "%-20s " % inst.verb_stem break else: s += "%-20s " % "-" for inst in self: argstr = "*" for (start, end), argid in inst.tagged_spans: if i == start: argstr = f"({argid}{argstr}" if i == (end - 1): argstr += ")" s += "%-12s " % argstr s += "\n" return s def 
_tree2conll(self, tree, wordnum, words, pos, synt): assert isinstance(tree, Tree) if len(tree) == 1 and isinstance(tree[0], str): pos[wordnum] = tree.label() assert words[wordnum] == tree[0] return wordnum + 1 elif len(tree) == 1 and isinstance(tree[0], tuple): assert len(tree[0]) == 2 pos[wordnum], pos[wordnum] = tree[0] return wordnum + 1 else: synt[wordnum] = f"({tree.label()}{synt[wordnum]}" for child in tree: wordnum = self._tree2conll(child, wordnum, words, pos, synt) synt[wordnum - 1] += ")" return wordnum class ConllChunkCorpusReader(ConllCorpusReader): def __init__( self, root, fileids, chunk_types, encoding="utf8", tagset=None, separator=None ): ConllCorpusReader.__init__( self, root, fileids, ("words", "pos", "chunk"), chunk_types=chunk_types, encoding=encoding, tagset=tagset, separator=separator, )
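A short sketch showing the reader in use. The bundled conll2000 corpus is loaded with the ConllChunkCorpusReader defined above (word/POS/chunk columns) and requires nltk.download("conll2000"); constructing ConllCorpusReader directly on your own data works the same way, with columntypes describing the file's columns.

from nltk.corpus import conll2000

print(conll2000.chunked_sents("train.txt", chunk_types=["NP"])[0])  # NP chunks only
print(conll2000.iob_words("test.txt")[:5])                          # (word, pos, iob) triples
print(conll2000.tagged_sents("test.txt")[0])                        # (word, pos) pairs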
Natural Language Toolkit: An Crubadan N-grams Reader. (C) 2001-2023 NLTK Project. Author: Avital Pekker <avital.pekker@utoronto.ca>. URL: https://www.nltk.org. For license information, see LICENSE.TXT.

An NLTK interface for the n-gram statistics gathered from the corpora for each language using An Crubadan. There are multiple potential applications for the data, but this reader was created with the goal of using it in the context of language identification. For details about An Crubadan, this data, and its potential uses, see: http://borel.slu.edu/crubadan/index.html

CrubadanCorpusReader is a corpus reader used to access the An Crubadan language n-gram files. lang_freq() returns the n-gram FreqDist for a specific language, given its ISO 639-3 language code; langs() returns a list of supported languages as ISO 639-3 codes; iso_to_crubadan() returns the internal Crubadan code for an ISO 639-3 code, and crubadan_to_iso() returns the ISO 639-3 code for an internal Crubadan code. Internally, the reader loads the language mappings between codes and descriptions from table.txt (raising a RuntimeError asking you to install the 'crubadan' corpus via nltk.download() if it is not available), and loads a single n-gram language file for a given ISO 639-3 language code, returning its FreqDist.
import re from os import path from nltk.corpus.reader import CorpusReader from nltk.data import ZipFilePathPointer from nltk.probability import FreqDist class CrubadanCorpusReader(CorpusReader): _LANG_MAPPER_FILE = "table.txt" _all_lang_freq = {} def __init__(self, root, fileids, encoding="utf8", tagset=None): super().__init__(root, fileids, encoding="utf8") self._lang_mapping_data = [] self._load_lang_mapping_data() def lang_freq(self, lang): if lang not in self._all_lang_freq: self._all_lang_freq[lang] = self._load_lang_ngrams(lang) return self._all_lang_freq[lang] def langs(self): return [row[1] for row in self._lang_mapping_data] def iso_to_crubadan(self, lang): for i in self._lang_mapping_data: if i[1].lower() == lang.lower(): return i[0] def crubadan_to_iso(self, lang): for i in self._lang_mapping_data: if i[0].lower() == lang.lower(): return i[1] def _load_lang_mapping_data(self): if isinstance(self.root, ZipFilePathPointer): raise RuntimeError( "Please install the 'crubadan' corpus first, use nltk.download()" ) mapper_file = path.join(self.root, self._LANG_MAPPER_FILE) if self._LANG_MAPPER_FILE not in self.fileids(): raise RuntimeError("Could not find language mapper file: " + mapper_file) with open(mapper_file, encoding="utf-8") as raw: strip_raw = raw.read().strip() self._lang_mapping_data = [row.split("\t") for row in strip_raw.split("\n")] def _load_lang_ngrams(self, lang): if lang not in self.langs(): raise RuntimeError("Unsupported language.") crubadan_code = self.iso_to_crubadan(lang) ngram_file = path.join(self.root, crubadan_code + "-3grams.txt") if not path.isfile(ngram_file): raise RuntimeError("No N-gram file found for requested language.") counts = FreqDist() with open(ngram_file, encoding="utf-8") as f: for line in f: data = line.split(" ") ngram = data[1].strip("\n") freq = int(data[0]) counts[ngram] = freq return counts
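A usage sketch for the reader above via the bundled corpus object (assumes nltk.download("crubadan"); which languages ship n-gram tables depends on the downloaded data, so the code picks a code from langs() rather than hard-coding one):

from nltk.corpus import crubadan

iso_codes = crubadan.langs()          # ISO 639-3 codes listed in table.txt
some_lang = iso_codes[0]
fd = crubadan.lang_freq(some_lang)    # FreqDist over character 3-grams
print(some_lang, fd.most_common(5))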
Natural Language Toolkit: Dependency Corpus Reader. (C) 2001-2023 NLTK Project. Authors: Kepa Sarasola <kepa.sarasola@ehu.es>, Iker Manterola <returntothehangar@hotmail.com>. URL: https://www.nltk.org. For license information, see LICENSE.TXT.

The corpus view's read_block() reads the next sentence, strips off the -DOCSTART- marker if present (the _DOCSTART constant defines the document start marker), extracts the word and tag from any of the supported column formats, discards the tags if they weren't requested, and returns the result.
from nltk.corpus.reader.api import * from nltk.corpus.reader.util import * from nltk.parse import DependencyGraph from nltk.tokenize import * class DependencyCorpusReader(SyntaxCorpusReader): def __init__( self, root, fileids, encoding="utf8", word_tokenizer=TabTokenizer(), sent_tokenizer=RegexpTokenizer("\n", gaps=True), para_block_reader=read_blankline_block, ): SyntaxCorpusReader.__init__(self, root, fileids, encoding) def words(self, fileids=None): return concat( [ DependencyCorpusView(fileid, False, False, False, encoding=enc) for fileid, enc in self.abspaths(fileids, include_encoding=True) ] ) def tagged_words(self, fileids=None): return concat( [ DependencyCorpusView(fileid, True, False, False, encoding=enc) for fileid, enc in self.abspaths(fileids, include_encoding=True) ] ) def sents(self, fileids=None): return concat( [ DependencyCorpusView(fileid, False, True, False, encoding=enc) for fileid, enc in self.abspaths(fileids, include_encoding=True) ] ) def tagged_sents(self, fileids=None): return concat( [ DependencyCorpusView(fileid, True, True, False, encoding=enc) for fileid, enc in self.abspaths(fileids, include_encoding=True) ] ) def parsed_sents(self, fileids=None): sents = concat( [ DependencyCorpusView(fileid, False, True, True, encoding=enc) for fileid, enc in self.abspaths(fileids, include_encoding=True) ] ) return [DependencyGraph(sent) for sent in sents] class DependencyCorpusView(StreamBackedCorpusView): _DOCSTART = "-DOCSTART- -DOCSTART- O\n" def __init__( self, corpus_file, tagged, group_by_sent, dependencies, chunk_types=None, encoding="utf8", ): self._tagged = tagged self._dependencies = dependencies self._group_by_sent = group_by_sent self._chunk_types = chunk_types StreamBackedCorpusView.__init__(self, corpus_file, encoding=encoding) def read_block(self, stream): sent = read_blankline_block(stream)[0].strip() if sent.startswith(self._DOCSTART): sent = sent[len(self._DOCSTART) :].lstrip() if not self._dependencies: lines = [line.split("\t") for line in sent.split("\n")] if len(lines[0]) == 3 or len(lines[0]) == 4: sent = [(line[0], line[1]) for line in lines] elif len(lines[0]) == 10: sent = [(line[1], line[4]) for line in lines] else: raise ValueError("Unexpected number of fields in dependency tree file") if not self._tagged: sent = [word for (word, tag) in sent] if self._group_by_sent: return [sent] else: return list(sent)
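The bundled dependency_treebank corpus is read with the DependencyCorpusReader above; a minimal sketch (assumes nltk.download("dependency_treebank")):

from nltk.corpus import dependency_treebank

dg = dependency_treebank.parsed_sents()[0]    # a DependencyGraph
print(dg.tree())                              # tree built from the dependency arcs
print(dependency_treebank.tagged_sents()[0])  # (word, tag) pairs for the same sentence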
Natural Language Toolkit: IEER Corpus Reader. (C) 2001-2023 NLTK Project. Authors: Steven Bird <stevenbird1@gmail.com>, Edward Loper <edloper@gmail.com>. URL: https://www.nltk.org. For license information, see LICENSE.TXT.

Corpus reader for the Information Extraction and Entity Recognition corpus. NIST 1999 Information Extraction: Entity Recognition Evaluation, https://www.itl.nist.gov/iad/894.01/tests/ie-er/er_99/er_99.htm

This corpus contains the NEWSWIRE development test data for the NIST 1999 IE-ER Evaluation. The files were taken from the subdirectory ie_er_99/english/devtest/newswire/*.ref.nwt and the filenames were shortened. The corpus contains the following files: APW_19980314, APW_19980424, APW_19980429, NYT_19980315, NYT_19980403, and NYT_19980407.

titles is a dictionary whose keys are the names of documents in this corpus and whose values are descriptions of those documents' contents; documents is a list of all documents in this corpus. In the reader, _read_parsed_block() carries an open todo (figure out why empty documents are being returned), and _read_block() skips any preamble, reads a document delimited by <DOC>...</DOC>, and returns it.
import nltk from nltk.corpus.reader.api import * titles = { "APW_19980314": "Associated Press Weekly, 14 March 1998", "APW_19980424": "Associated Press Weekly, 24 April 1998", "APW_19980429": "Associated Press Weekly, 29 April 1998", "NYT_19980315": "New York Times, 15 March 1998", "NYT_19980403": "New York Times, 3 April 1998", "NYT_19980407": "New York Times, 7 April 1998", } documents = sorted(titles) class IEERDocument: def __init__(self, text, docno=None, doctype=None, date_time=None, headline=""): self.text = text self.docno = docno self.doctype = doctype self.date_time = date_time self.headline = headline def __repr__(self): if self.headline: headline = " ".join(self.headline.leaves()) else: headline = ( " ".join([w for w in self.text.leaves() if w[:1] != "<"][:12]) + "..." ) if self.docno is not None: return f"<IEERDocument {self.docno}: {headline!r}>" else: return "<IEERDocument: %r>" % headline class IEERCorpusReader(CorpusReader): def docs(self, fileids=None): return concat( [ StreamBackedCorpusView(fileid, self._read_block, encoding=enc) for (fileid, enc) in self.abspaths(fileids, True) ] ) def parsed_docs(self, fileids=None): return concat( [ StreamBackedCorpusView(fileid, self._read_parsed_block, encoding=enc) for (fileid, enc) in self.abspaths(fileids, True) ] ) def _read_parsed_block(self, stream): return [ self._parse(doc) for doc in self._read_block(stream) if self._parse(doc).docno is not None ] def _parse(self, doc): val = nltk.chunk.ieerstr2tree(doc, root_label="DOCUMENT") if isinstance(val, dict): return IEERDocument(**val) else: return IEERDocument(val) def _read_block(self, stream): out = [] while True: line = stream.readline() if not line: break if line.strip() == "<DOC>": break out.append(line) while True: line = stream.readline() if not line: break out.append(line) if line.strip() == "</DOC>": break return ["\n".join(out)]
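A short sketch of the reader via the bundled ieer corpus (assumes nltk.download("ieer"); the fileid "NYT_19980315" is one of the six documents listed above):

from nltk.corpus import ieer

docs = ieer.parsed_docs("NYT_19980315")
doc = docs[0]
print(doc.docno)
print(" ".join(doc.headline.leaves()))   # headline as plain text
print(doc.text)                          # chunk Tree with named-entity subtrees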
Natural Language Toolkit: Indian Language POS-Tagged Corpus Reader. (C) 2001-2023 NLTK Project. Authors: Steven Bird <stevenbird1@gmail.com>, Edward Loper <edloper@gmail.com>. URL: https://www.nltk.org. For license information, see LICENSE.TXT.

Indian Language POS-Tagged Corpus, collected by A Kumaran, Microsoft Research, India. Distributed with permission. Contents: Bangla: IIT Kharagpur; Hindi: Microsoft Research India; Marathi: IIT Bombay; Telugu: IIIT Hyderabad. Each file is a list of words, one sentence per line; blank lines are ignored.
from nltk.corpus.reader.api import * from nltk.corpus.reader.util import * from nltk.tag import map_tag, str2tuple class IndianCorpusReader(CorpusReader): def words(self, fileids=None): return concat( [ IndianCorpusView(fileid, enc, False, False) for (fileid, enc) in self.abspaths(fileids, True) ] ) def tagged_words(self, fileids=None, tagset=None): if tagset and tagset != self._tagset: tag_mapping_function = lambda t: map_tag(self._tagset, tagset, t) else: tag_mapping_function = None return concat( [ IndianCorpusView(fileid, enc, True, False, tag_mapping_function) for (fileid, enc) in self.abspaths(fileids, True) ] ) def sents(self, fileids=None): return concat( [ IndianCorpusView(fileid, enc, False, True) for (fileid, enc) in self.abspaths(fileids, True) ] ) def tagged_sents(self, fileids=None, tagset=None): if tagset and tagset != self._tagset: tag_mapping_function = lambda t: map_tag(self._tagset, tagset, t) else: tag_mapping_function = None return concat( [ IndianCorpusView(fileid, enc, True, True, tag_mapping_function) for (fileid, enc) in self.abspaths(fileids, True) ] ) class IndianCorpusView(StreamBackedCorpusView): def __init__( self, corpus_file, encoding, tagged, group_by_sent, tag_mapping_function=None ): self._tagged = tagged self._group_by_sent = group_by_sent self._tag_mapping_function = tag_mapping_function StreamBackedCorpusView.__init__(self, corpus_file, encoding=encoding) def read_block(self, stream): line = stream.readline() if line.startswith("<"): return [] sent = [str2tuple(word, sep="_") for word in line.split()] if self._tag_mapping_function: sent = [(w, self._tag_mapping_function(t)) for (w, t) in sent] if not self._tagged: sent = [w for (w, t) in sent] if self._group_by_sent: return [sent] else: return sent
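A minimal sketch via the bundled indian corpus (assumes nltk.download("indian"); the fileids are bangla.pos, hindi.pos, marathi.pos and telugu.pos):

from nltk.corpus import indian

print(indian.tagged_words("hindi.pos")[:5])   # (word, tag) pairs
print(indian.sents("bangla.pos")[0])          # first sentence as word strings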
Natural Language Toolkit: IPI PAN Corpus Reader. (C) 2001-2023 NLTK Project. Author: Konrad Goluchowski <kodie@mimuw.edu.pl>. URL: https://www.nltk.org. For license information, see LICENSE.TXT.

Corpus reader designed to work with the IPI PAN corpus; see http://korpus.pl/en/ for more details about the IPI PAN corpus.

The corpus includes information about text domain, channel and categories. You can access possible values using domains(), channels() and categories(). You can also use this metadata to filter files, e.g. fileids(channel='prasa') or fileids(categories='publicystyczny').

The reader supports the methods words, sents, paras and their tagged versions. You can get the part of speech instead of the full tag by giving the simplify_tags=True parameter, e.g. tagged_sents(simplify_tags=True). You can get all disambiguated tags by specifying the parameter one_tag=False, e.g. tagged_paras(one_tag=False), and all tags that were assigned by the morphological analyzer by specifying the parameter disamb_only=False, e.g. tagged_words(disamb_only=False).

The IPI PAN corpus contains tags indicating whether there is a space between two tokens. To add special "no space" markers, specify the parameter append_no_space=True, e.g. tagged_words(append_no_space=True); as a result, wherever there should be no space between two tokens, a new pair ('', 'no-space') is inserted for tagged data, and just '' for methods without tags.

The corpus reader can also try to append spaces between words. To enable this option, specify the parameter append_space=True, e.g. words(append_space=True); as a result, either ' ' or (' ', 'space') is inserted between tokens.

By default, XML entities like &quot; and &amp; are replaced by the corresponding characters. You can turn off this feature by specifying the parameter replace_xmlentities=False, e.g. words(replace_xmlentities=False).

(In the corpus view's _read_data(): we may have only part of the last line.)
import functools from nltk.corpus.reader.api import CorpusReader from nltk.corpus.reader.util import StreamBackedCorpusView, concat def _parse_args(fun): @functools.wraps(fun) def decorator(self, fileids=None, **kwargs): kwargs.pop("tags", None) if not fileids: fileids = self.fileids() return fun(self, fileids, **kwargs) return decorator class IPIPANCorpusReader(CorpusReader): def __init__(self, root, fileids): CorpusReader.__init__(self, root, fileids, None, None) def channels(self, fileids=None): if not fileids: fileids = self.fileids() return self._parse_header(fileids, "channel") def domains(self, fileids=None): if not fileids: fileids = self.fileids() return self._parse_header(fileids, "domain") def categories(self, fileids=None): if not fileids: fileids = self.fileids() return [ self._map_category(cat) for cat in self._parse_header(fileids, "keyTerm") ] def fileids(self, channels=None, domains=None, categories=None): if channels is not None and domains is not None and categories is not None: raise ValueError( "You can specify only one of channels, domains " "and categories parameter at once" ) if channels is None and domains is None and categories is None: return CorpusReader.fileids(self) if isinstance(channels, str): channels = [channels] if isinstance(domains, str): domains = [domains] if isinstance(categories, str): categories = [categories] if channels: return self._list_morph_files_by("channel", channels) elif domains: return self._list_morph_files_by("domain", domains) else: return self._list_morph_files_by( "keyTerm", categories, map=self._map_category ) @_parse_args def sents(self, fileids=None, **kwargs): return concat( [ self._view( fileid, mode=IPIPANCorpusView.SENTS_MODE, tags=False, **kwargs ) for fileid in self._list_morph_files(fileids) ] ) @_parse_args def paras(self, fileids=None, **kwargs): return concat( [ self._view( fileid, mode=IPIPANCorpusView.PARAS_MODE, tags=False, **kwargs ) for fileid in self._list_morph_files(fileids) ] ) @_parse_args def words(self, fileids=None, **kwargs): return concat( [ self._view(fileid, tags=False, **kwargs) for fileid in self._list_morph_files(fileids) ] ) @_parse_args def tagged_sents(self, fileids=None, **kwargs): return concat( [ self._view(fileid, mode=IPIPANCorpusView.SENTS_MODE, **kwargs) for fileid in self._list_morph_files(fileids) ] ) @_parse_args def tagged_paras(self, fileids=None, **kwargs): return concat( [ self._view(fileid, mode=IPIPANCorpusView.PARAS_MODE, **kwargs) for fileid in self._list_morph_files(fileids) ] ) @_parse_args def tagged_words(self, fileids=None, **kwargs): return concat( [self._view(fileid, **kwargs) for fileid in self._list_morph_files(fileids)] ) def _list_morph_files(self, fileids): return [f for f in self.abspaths(fileids)] def _list_header_files(self, fileids): return [ f.replace("morph.xml", "header.xml") for f in self._list_morph_files(fileids) ] def _parse_header(self, fileids, tag): values = set() for f in self._list_header_files(fileids): values_list = self._get_tag(f, tag) for v in values_list: values.add(v) return list(values) def _list_morph_files_by(self, tag, values, map=None): fileids = self.fileids() ret_fileids = set() for f in fileids: fp = self.abspath(f).replace("morph.xml", "header.xml") values_list = self._get_tag(fp, tag) for value in values_list: if map is not None: value = map(value) if value in values: ret_fileids.add(f) return list(ret_fileids) def _get_tag(self, f, tag): tags = [] with open(f) as infile: header = infile.read() tag_end = 0 while True: tag_pos = 
header.find("<" + tag, tag_end) if tag_pos < 0: return tags tag_end = header.find("</" + tag + ">", tag_pos) tags.append(header[tag_pos + len(tag) + 2 : tag_end]) def _map_category(self, cat): pos = cat.find(">") if pos == -1: return cat else: return cat[pos + 1 :] def _view(self, filename, **kwargs): tags = kwargs.pop("tags", True) mode = kwargs.pop("mode", 0) simplify_tags = kwargs.pop("simplify_tags", False) one_tag = kwargs.pop("one_tag", True) disamb_only = kwargs.pop("disamb_only", True) append_no_space = kwargs.pop("append_no_space", False) append_space = kwargs.pop("append_space", False) replace_xmlentities = kwargs.pop("replace_xmlentities", True) if len(kwargs) > 0: raise ValueError("Unexpected arguments: %s" % kwargs.keys()) if not one_tag and not disamb_only: raise ValueError( "You cannot specify both one_tag=False and " "disamb_only=False" ) if not tags and (simplify_tags or not one_tag or not disamb_only): raise ValueError( "You cannot specify simplify_tags, one_tag or " "disamb_only with functions other than tagged_*" ) return IPIPANCorpusView( filename, tags=tags, mode=mode, simplify_tags=simplify_tags, one_tag=one_tag, disamb_only=disamb_only, append_no_space=append_no_space, append_space=append_space, replace_xmlentities=replace_xmlentities, ) class IPIPANCorpusView(StreamBackedCorpusView): WORDS_MODE = 0 SENTS_MODE = 1 PARAS_MODE = 2 def __init__(self, filename, startpos=0, **kwargs): StreamBackedCorpusView.__init__(self, filename, None, startpos, None) self.in_sentence = False self.position = 0 self.show_tags = kwargs.pop("tags", True) self.disamb_only = kwargs.pop("disamb_only", True) self.mode = kwargs.pop("mode", IPIPANCorpusView.WORDS_MODE) self.simplify_tags = kwargs.pop("simplify_tags", False) self.one_tag = kwargs.pop("one_tag", True) self.append_no_space = kwargs.pop("append_no_space", False) self.append_space = kwargs.pop("append_space", False) self.replace_xmlentities = kwargs.pop("replace_xmlentities", True) def read_block(self, stream): sentence = [] sentences = [] space = False no_space = False tags = set() lines = self._read_data(stream) while True: if len(lines) <= 1: self._seek(stream) lines = self._read_data(stream) if lines == [""]: assert not sentences return [] line = lines.pop() self.position += len(line) + 1 if line.startswith('<chunk type="s"'): self.in_sentence = True elif line.startswith('<chunk type="p"'): pass elif line.startswith("<tok"): if self.append_space and space and not no_space: self._append_space(sentence) space = True no_space = False orth = "" tags = set() elif line.startswith("</chunk"): if self.in_sentence: self.in_sentence = False self._seek(stream) if self.mode == self.SENTS_MODE: return [sentence] elif self.mode == self.WORDS_MODE: if self.append_space: self._append_space(sentence) return sentence else: sentences.append(sentence) elif self.mode == self.PARAS_MODE: self._seek(stream) return [sentences] elif line.startswith("<orth"): orth = line[6:-7] if self.replace_xmlentities: orth = orth.replace("&quot;", '"').replace("&amp;", "&") elif line.startswith("<lex"): if not self.disamb_only or line.find("disamb=") != -1: tag = line[line.index("<ctag") + 6 : line.index("</ctag")] tags.add(tag) elif line.startswith("</tok"): if self.show_tags: if self.simplify_tags: tags = [t.split(":")[0] for t in tags] if not self.one_tag or not self.disamb_only: sentence.append((orth, tuple(tags))) else: sentence.append((orth, tags.pop())) else: sentence.append(orth) elif line.startswith("<ns/>"): if self.append_space: no_space = True if 
self.append_no_space: if self.show_tags: sentence.append(("", "no-space")) else: sentence.append("") elif line.startswith("</cesAna"): pass def _read_data(self, stream): self.position = stream.tell() buff = stream.read(4096) lines = buff.split("\n") lines.reverse() return lines def _seek(self, stream): stream.seek(self.position) def _append_space(self, sentence): if self.show_tags: sentence.append((" ", "space")) else: sentence.append(" ")
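As a complement to the description above, here is a minimal usage sketch for IPIPANCorpusReader. It is illustrative only: the corpus root path and the morph.xml fileid pattern are assumptions about a locally unpacked copy of the corpus, not something the reader ships with.

from nltk.corpus.reader.ipipan import IPIPANCorpusReader

# Hypothetical local layout: <root>/<document>/morph.xml with a matching header.xml.
reader = IPIPANCorpusReader("/home/user/nltk_data/corpora/ipipan", r".*morph\.xml")

# Metadata helpers and metadata-based filtering described in the header.
print(reader.channels())
press = reader.fileids(channels="prasa")

# Tagged access with the documented keyword options.
simplified = reader.tagged_sents(fileids=press, simplify_tags=True)
with_markers = reader.tagged_words(fileids=press, append_no_space=True)
plain = reader.words(fileids=press, append_space=True)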
Natural Language Toolkit: Lin's Thesaurus
Copyright (C) 2001-2023 NLTK Project
Author: Dan Blanchard <dblanchard@ets.org>
URL: <https://www.nltk.org/>
For license information, see LICENSE.TXT

Wrapper for the LISP-formatted thesauruses distributed by Dekang Lin.

LinThesaurusCorpusReader uses a compiled regular expression (_key_re) to extract the key from the first line of each thesaurus entry, plus a small static factory for creating a defaultdict of defaultdicts. The constructor takes root, the root directory containing the thesaurus LISP files (string), and badscore, the score (float) to give to words which do not appear in each other's sets of synonyms. While parsing, the reader distinguishes the start of an entry, the end of an entry, and lines with pairs of ngrams and scores.

similarity(ngram1, ngram2, fileid=None) returns the similarity score for two ngrams. If fileid is specified, just the score for the two ngrams is returned; otherwise, a list of (fileid, score) tuples. Entries don't contain themselves, so the reader makes sure the similarity between an item and itself is 1.0.

scored_synonyms(ngram, fileid=None) returns the scored synonyms (pairs of synonyms and scores) for the given ngram. If fileid is specified, the pairs for that file are returned; otherwise, a list of (fileid, pairs) tuples.

synonyms(ngram, fileid=None) returns the synonyms for the given ngram. If fileid is specified, a list of synonyms is returned; otherwise, a list of (fileid, synonyms) tuples.

__contains__(ngram) determines whether or not the given ngram is in the thesaurus (in any file).

A demo() function closes the module, and a short usage sketch follows the implementation below.
import re from collections import defaultdict from functools import reduce from nltk.corpus.reader import CorpusReader class LinThesaurusCorpusReader(CorpusReader): _key_re = re.compile(r'\("?([^"]+)"? \(desc [0-9.]+\).+') @staticmethod def __defaultdict_factory(): return defaultdict(dict) def __init__(self, root, badscore=0.0): super().__init__(root, r"sim[A-Z]\.lsp") self._thesaurus = defaultdict(LinThesaurusCorpusReader.__defaultdict_factory) self._badscore = badscore for path, encoding, fileid in self.abspaths( include_encoding=True, include_fileid=True ): with open(path) as lin_file: first = True for line in lin_file: line = line.strip() if first: key = LinThesaurusCorpusReader._key_re.sub(r"\1", line) first = False elif line == "))": first = True else: split_line = line.split("\t") if len(split_line) == 2: ngram, score = split_line self._thesaurus[fileid][key][ngram.strip('"')] = float( score ) def similarity(self, ngram1, ngram2, fileid=None): if ngram1 == ngram2: if fileid: return 1.0 else: return [(fid, 1.0) for fid in self._fileids] else: if fileid: return ( self._thesaurus[fileid][ngram1][ngram2] if ngram2 in self._thesaurus[fileid][ngram1] else self._badscore ) else: return [ ( fid, ( self._thesaurus[fid][ngram1][ngram2] if ngram2 in self._thesaurus[fid][ngram1] else self._badscore ), ) for fid in self._fileids ] def scored_synonyms(self, ngram, fileid=None): if fileid: return self._thesaurus[fileid][ngram].items() else: return [ (fileid, self._thesaurus[fileid][ngram].items()) for fileid in self._fileids ] def synonyms(self, ngram, fileid=None): if fileid: return self._thesaurus[fileid][ngram].keys() else: return [ (fileid, self._thesaurus[fileid][ngram].keys()) for fileid in self._fileids ] def __contains__(self, ngram): return reduce( lambda accum, fileid: accum or (ngram in self._thesaurus[fileid]), self._fileids, False, ) def demo(): from nltk.corpus import lin_thesaurus as thes word1 = "business" word2 = "enterprise" print("Getting synonyms for " + word1) print(thes.synonyms(word1)) print("Getting scored synonyms for " + word1) print(thes.scored_synonyms(word1)) print("Getting synonyms from simN.lsp (noun subsection) for " + word1) print(thes.synonyms(word1, fileid="simN.lsp")) print("Getting synonyms from simN.lsp (noun subsection) for " + word1) print(thes.synonyms(word1, fileid="simN.lsp")) print(f"Similarity score for {word1} and {word2}:") print(thes.similarity(word1, word2)) if __name__ == "__main__": demo()
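In addition to demo(), the sketch below shows how the return shape of similarity() and synonyms() depends on whether a fileid is passed. It assumes the lin_thesaurus data package has already been downloaded with nltk.download("lin_thesaurus").

from nltk.corpus import lin_thesaurus as thes

# With an explicit fileid, a single score or key view is returned.
print(thes.similarity("business", "enterprise", fileid="simN.lsp"))
print(list(thes.synonyms("business", fileid="simN.lsp"))[:5])

# Without one, results are grouped per thesaurus file (e.g. simA.lsp, simN.lsp, simV.lsp).
for fileid, score in thes.similarity("business", "enterprise"):
    print(fileid, score)

# Membership is checked across all files via __contains__.
print("business" in thes)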
Natural Language Toolkit: NKJP Corpus Reader
Copyright (C) 2001-2023 NLTK Project
Author: Gabriela Kaczka
URL: <https://www.nltk.org/>
For license information, see LICENSE.TXT

The _parse_args decorator wraps function arguments: if fileids are not specified, the function is called with the NKJPCorpusReader paths.

NKJPCorpusReader is a corpus reader designed to work with the National Corpus of Polish; see http://nkjp.pl/ for more details about NKJP.

Usage example:

    import nltk
    import nkjp
    from nkjp import NKJPCorpusReader
    x = NKJPCorpusReader(root='/home/user/nltk_data/corpora/nkjp/', fileids='')  # obtain the whole corpus
    x.header()
    x.raw()
    x.words()
    x.tagged_words(tags=['subst', 'comp'])  # link to find more tags: nkjp.pl/poliqarp/help/ense2.html
    x.sents()
    x = NKJPCorpusReader(root='/home/user/nltk_data/corpora/nkjp/', fileids='Wilk')  # obtain particular file(s)
    x.header(fileids=['WilkDom', '/home/user/nltk_data/corpora/nkjp/WilkWilczy'])
    x.tagged_words(fileids=['WilkDom', '/home/user/nltk_data/corpora/nkjp/WilkWilczy'], tags=['subst', 'comp'])

fileids() returns a list of file identifiers for the fileids that make up this corpus; _view() returns a view specialised for use with a particular corpus file; add_root() adds the root, if necessary, to the specified fileid. header() returns the header(s) of the specified fileids, sents() returns sentences, words() returns words, raw() returns the raw text, and tagged_words() is called with the desired tags as a list, e.g. tags=['subst', 'comp'], and returns tagged words in the specified fileids.

NKJPCorpus_Header_View is a stream-backed corpus view specialized for use with header.xml files in the NKJP corpus.

XML_Tool is a helper class that rewrites an XML file into one without references to the nkjp: namespace. That is needed because XMLCorpusView assumes that one can find short substrings of XML that are valid XML, which is not true if a namespace is declared at top level. The rewriting strips the namespace prefixes in all files, and the <nkjp:paren> and <choice> elements in ann_segmentation.xml.

NKJPCorpus_Segmentation_View is a stream-backed corpus view specialized for use with ann_segmentation.xml files; it intersperses an NKJPCorpus_Text_View and performs the XML preprocessing in its base-class init. get_sent_beg() and get_sent_end() return the indices of the beginning and end letters of a sentence within its text segment, get_sentences() returns one sentence, and remove_choice() keeps an increasing sequence of ids (in case of a choice, the first possibility is taken).

NKJPCorpus_Text_View is a stream-backed corpus view specialized for use with text.xml files; it also performs the XML preprocessing in its base-class init. handle_query() returns the text as a list of sentences, and handle_elt() fills a dictionary used later in sents mode.

NKJPCorpus_Morph_View is a stream-backed corpus view specialized for use with ann_morphosyntax.xml files; if tags are not specified, handle_elt() always returns the word.

Another short, hedged usage sketch follows the implementation below.
import functools import os import re import tempfile from nltk.corpus.reader.util import concat from nltk.corpus.reader.xmldocs import XMLCorpusReader, XMLCorpusView def _parse_args(fun): @functools.wraps(fun) def decorator(self, fileids=None, **kwargs): if not fileids: fileids = self._paths return fun(self, fileids, **kwargs) return decorator class NKJPCorpusReader(XMLCorpusReader): WORDS_MODE = 0 SENTS_MODE = 1 HEADER_MODE = 2 RAW_MODE = 3 def __init__(self, root, fileids=".*"): if isinstance(fileids, str): XMLCorpusReader.__init__(self, root, fileids + ".*/header.xml") else: XMLCorpusReader.__init__( self, root, [fileid + "/header.xml" for fileid in fileids] ) self._paths = self.get_paths() def get_paths(self): return [ os.path.join(str(self._root), f.split("header.xml")[0]) for f in self._fileids ] def fileids(self): return [f.split("header.xml")[0] for f in self._fileids] def _view(self, filename, tags=None, **kwargs): mode = kwargs.pop("mode", NKJPCorpusReader.WORDS_MODE) if mode is NKJPCorpusReader.WORDS_MODE: return NKJPCorpus_Morph_View(filename, tags=tags) elif mode is NKJPCorpusReader.SENTS_MODE: return NKJPCorpus_Segmentation_View(filename, tags=tags) elif mode is NKJPCorpusReader.HEADER_MODE: return NKJPCorpus_Header_View(filename, tags=tags) elif mode is NKJPCorpusReader.RAW_MODE: return NKJPCorpus_Text_View( filename, tags=tags, mode=NKJPCorpus_Text_View.RAW_MODE ) else: raise NameError("No such mode!") def add_root(self, fileid): if self.root in fileid: return fileid return self.root + fileid @_parse_args def header(self, fileids=None, **kwargs): return concat( [ self._view( self.add_root(fileid), mode=NKJPCorpusReader.HEADER_MODE, **kwargs ).handle_query() for fileid in fileids ] ) @_parse_args def sents(self, fileids=None, **kwargs): return concat( [ self._view( self.add_root(fileid), mode=NKJPCorpusReader.SENTS_MODE, **kwargs ).handle_query() for fileid in fileids ] ) @_parse_args def words(self, fileids=None, **kwargs): return concat( [ self._view( self.add_root(fileid), mode=NKJPCorpusReader.WORDS_MODE, **kwargs ).handle_query() for fileid in fileids ] ) @_parse_args def tagged_words(self, fileids=None, **kwargs): tags = kwargs.pop("tags", []) return concat( [ self._view( self.add_root(fileid), mode=NKJPCorpusReader.WORDS_MODE, tags=tags, **kwargs ).handle_query() for fileid in fileids ] ) @_parse_args def raw(self, fileids=None, **kwargs): return concat( [ self._view( self.add_root(fileid), mode=NKJPCorpusReader.RAW_MODE, **kwargs ).handle_query() for fileid in fileids ] ) class NKJPCorpus_Header_View(XMLCorpusView): def __init__(self, filename, **kwargs): self.tagspec = ".*/sourceDesc$" XMLCorpusView.__init__(self, filename + "header.xml", self.tagspec) def handle_query(self): self._open() header = [] while True: segm = XMLCorpusView.read_block(self, self._stream) if len(segm) == 0: break header.extend(segm) self.close() return header def handle_elt(self, elt, context): titles = elt.findall("bibl/title") title = [] if titles: title = "\n".join(title.text.strip() for title in titles) authors = elt.findall("bibl/author") author = [] if authors: author = "\n".join(author.text.strip() for author in authors) dates = elt.findall("bibl/date") date = [] if dates: date = "\n".join(date.text.strip() for date in dates) publishers = elt.findall("bibl/publisher") publisher = [] if publishers: publisher = "\n".join(publisher.text.strip() for publisher in publishers) idnos = elt.findall("bibl/idno") idno = [] if idnos: idno = "\n".join(idno.text.strip() for idno in idnos) notes = 
elt.findall("bibl/note") note = [] if notes: note = "\n".join(note.text.strip() for note in notes) return { "title": title, "author": author, "date": date, "publisher": publisher, "idno": idno, "note": note, } class XML_Tool: def __init__(self, root, filename): self.read_file = os.path.join(root, filename) self.write_file = tempfile.NamedTemporaryFile(delete=False) def build_preprocessed_file(self): try: fr = open(self.read_file) fw = self.write_file line = " " while len(line): line = fr.readline() x = re.split(r"nkjp:[^ ]* ", line) ret = " ".join(x) x = re.split("<nkjp:paren>", ret) ret = " ".join(x) x = re.split("</nkjp:paren>", ret) ret = " ".join(x) x = re.split("<choice>", ret) ret = " ".join(x) x = re.split("</choice>", ret) ret = " ".join(x) fw.write(ret) fr.close() fw.close() return self.write_file.name except Exception as e: self.remove_preprocessed_file() raise Exception from e def remove_preprocessed_file(self): os.remove(self.write_file.name) class NKJPCorpus_Segmentation_View(XMLCorpusView): def __init__(self, filename, **kwargs): self.tagspec = ".*p/.*s" self.text_view = NKJPCorpus_Text_View( filename, mode=NKJPCorpus_Text_View.SENTS_MODE ) self.text_view.handle_query() self.xml_tool = XML_Tool(filename, "ann_segmentation.xml") XMLCorpusView.__init__( self, self.xml_tool.build_preprocessed_file(), self.tagspec ) def get_segm_id(self, example_word): return example_word.split("(")[1].split(",")[0] def get_sent_beg(self, beg_word): return int(beg_word.split(",")[1]) def get_sent_end(self, end_word): splitted = end_word.split(")")[0].split(",") return int(splitted[1]) + int(splitted[2]) def get_sentences(self, sent_segm): id = self.get_segm_id(sent_segm[0]) segm = self.text_view.segm_dict[id] beg = self.get_sent_beg(sent_segm[0]) end = self.get_sent_end(sent_segm[len(sent_segm) - 1]) return segm[beg:end] def remove_choice(self, segm): ret = [] prev_txt_end = -1 prev_txt_nr = -1 for word in segm: txt_nr = self.get_segm_id(word) if self.get_sent_beg(word) > prev_txt_end - 1 or prev_txt_nr != txt_nr: ret.append(word) prev_txt_end = self.get_sent_end(word) prev_txt_nr = txt_nr return ret def handle_query(self): try: self._open() sentences = [] while True: sent_segm = XMLCorpusView.read_block(self, self._stream) if len(sent_segm) == 0: break for segm in sent_segm: segm = self.remove_choice(segm) sentences.append(self.get_sentences(segm)) self.close() self.xml_tool.remove_preprocessed_file() return sentences except Exception as e: self.xml_tool.remove_preprocessed_file() raise Exception from e def handle_elt(self, elt, context): ret = [] for seg in elt: ret.append(seg.get("corresp")) return ret class NKJPCorpus_Text_View(XMLCorpusView): SENTS_MODE = 0 RAW_MODE = 1 def __init__(self, filename, **kwargs): self.mode = kwargs.pop("mode", 0) self.tagspec = ".*/div/ab" self.segm_dict = dict() self.xml_tool = XML_Tool(filename, "text.xml") XMLCorpusView.__init__( self, self.xml_tool.build_preprocessed_file(), self.tagspec ) def handle_query(self): try: self._open() x = self.read_block(self._stream) self.close() self.xml_tool.remove_preprocessed_file() return x except Exception as e: self.xml_tool.remove_preprocessed_file() raise Exception from e def read_block(self, stream, tagspec=None, elt_handler=None): txt = [] while True: segm = XMLCorpusView.read_block(self, stream) if len(segm) == 0: break for part in segm: txt.append(part) return [" ".join([segm for segm in txt])] def get_segm_id(self, elt): for attr in elt.attrib: if attr.endswith("id"): return elt.get(attr) def handle_elt(self, elt, 
context): if self.mode is NKJPCorpus_Text_View.SENTS_MODE: self.segm_dict[self.get_segm_id(elt)] = elt.text return elt.text class NKJPCorpus_Morph_View(XMLCorpusView): def __init__(self, filename, **kwargs): self.tags = kwargs.pop("tags", None) self.tagspec = ".*/seg/fs" self.xml_tool = XML_Tool(filename, "ann_morphosyntax.xml") XMLCorpusView.__init__( self, self.xml_tool.build_preprocessed_file(), self.tagspec ) def handle_query(self): try: self._open() words = [] while True: segm = XMLCorpusView.read_block(self, self._stream) if len(segm) == 0: break for part in segm: if part is not None: words.append(part) self.close() self.xml_tool.remove_preprocessed_file() return words except Exception as e: self.xml_tool.remove_preprocessed_file() raise Exception from e def handle_elt(self, elt, context): word = "" flag = False is_not_interp = True if self.tags is None: flag = True for child in elt: if "name" in child.keys() and child.attrib["name"] == "orth": for symbol in child: if symbol.tag == "string": word = symbol.text elif "name" in child.keys() and child.attrib["name"] == "interps": for symbol in child: if "type" in symbol.keys() and symbol.attrib["type"] == "lex": for symbol2 in symbol: if ( "name" in symbol2.keys() and symbol2.attrib["name"] == "ctag" ): for symbol3 in symbol2: if ( "value" in symbol3.keys() and self.tags is not None and symbol3.attrib["value"] in self.tags ): flag = True elif ( "value" in symbol3.keys() and symbol3.attrib["value"] == "interp" ): is_not_interp = False if flag and is_not_interp: return word
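A compact sketch building on the usage example in the header above. The root path and the Wilk document ids are assumptions about a locally obtained copy of the NKJP sample (the corpus itself is distributed separately via nkjp.pl), so treat this as an illustration of the API's shape rather than a guaranteed-runnable recipe.

from nltk.corpus.reader.nkjp import NKJPCorpusReader

root = "/home/user/nltk_data/corpora/nkjp/"           # hypothetical local root
reader = NKJPCorpusReader(root=root, fileids="Wilk")  # regex prefix over document directories

print(reader.fileids())                               # e.g. ['WilkDom/', 'WilkWilczy/']
doc = reader.fileids()[0]
print(reader.header(fileids=[doc])[0].get("title"))   # header() yields dicts of bibliographic fields
print(reader.sents(fileids=[doc])[:2])
print(reader.tagged_words(fileids=[doc], tags=["subst", "comp"])[:10])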
Natural Language Toolkit: NPS Chat Corpus Reader
Copyright (C) 2001-2023 NLTK Project
Author: Edward Loper <edloper@gmail.com>
URL: <https://www.nltk.org/>
For license information, see LICENSE.TXT
import re
import textwrap

from nltk.corpus.reader.api import *
from nltk.corpus.reader.util import *
from nltk.corpus.reader.xmldocs import *
from nltk.internals import ElementWrapper
from nltk.tag import map_tag
from nltk.util import LazyConcatenation


class NPSChatCorpusReader(XMLCorpusReader):
    def __init__(self, root, fileids, wrap_etree=False, tagset=None):
        XMLCorpusReader.__init__(self, root, fileids, wrap_etree)
        self._tagset = tagset

    def xml_posts(self, fileids=None):
        if self._wrap_etree:
            return concat(
                [
                    XMLCorpusView(fileid, "Session/Posts/Post", self._wrap_elt)
                    for fileid in self.abspaths(fileids)
                ]
            )
        else:
            return concat(
                [
                    XMLCorpusView(fileid, "Session/Posts/Post")
                    for fileid in self.abspaths(fileids)
                ]
            )

    def posts(self, fileids=None):
        return concat(
            [
                XMLCorpusView(
                    fileid, "Session/Posts/Post/terminals", self._elt_to_words
                )
                for fileid in self.abspaths(fileids)
            ]
        )

    def tagged_posts(self, fileids=None, tagset=None):
        def reader(elt, handler):
            return self._elt_to_tagged_words(elt, handler, tagset)

        return concat(
            [
                XMLCorpusView(fileid, "Session/Posts/Post/terminals", reader)
                for fileid in self.abspaths(fileids)
            ]
        )

    def words(self, fileids=None):
        return LazyConcatenation(self.posts(fileids))

    def tagged_words(self, fileids=None, tagset=None):
        return LazyConcatenation(self.tagged_posts(fileids, tagset))

    def _wrap_elt(self, elt, handler):
        return ElementWrapper(elt)

    def _elt_to_words(self, elt, handler):
        return [self._simplify_username(t.attrib["word"]) for t in elt.findall("t")]

    def _elt_to_tagged_words(self, elt, handler, tagset=None):
        tagged_post = [
            (self._simplify_username(t.attrib["word"]), t.attrib["pos"])
            for t in elt.findall("t")
        ]
        if tagset and tagset != self._tagset:
            tagged_post = [
                (w, map_tag(self._tagset, tagset, t)) for (w, t) in tagged_post
            ]
        return tagged_post

    @staticmethod
    def _simplify_username(word):
        if "User" in word:
            word = "U" + word.split("User", 1)[1]
        elif isinstance(word, bytes):
            word = word.decode("ascii")
        return word
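A short usage sketch for this reader through the standard nltk.corpus.nps_chat accessor. It assumes the nps_chat data package has been downloaded; the "class" attribute read at the end is the post's dialogue-act label carried in the corpus XML.

from nltk.corpus import nps_chat

posts = nps_chat.posts()            # token lists, one per post
tagged = nps_chat.tagged_posts()    # the same, as (word, POS) pairs
xml = nps_chat.xml_posts()          # raw <Post> elements

print(posts[0])
print(tagged[0][:5])
print(xml[0].get("class"), xml[0].text)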
Natural Language Toolkit: Opinion Lexicon Corpus Reader
Copyright (C) 2001-2023 NLTK Project
Author: Pierpaolo Pantone <24alsecondo@gmail.com>
URL: <https://www.nltk.org/>
For license information, see LICENSE.TXT

CorpusReader for the Opinion Lexicon.

Opinion Lexicon information
Authors: Minqing Hu and Bing Liu, 2004.
    Department of Computer Science, University of Illinois at Chicago
Contact: Bing Liu, liub@cs.uic.edu, https://www.cs.uic.edu/~liub
Distributed with permission.

Related papers:
- Minqing Hu and Bing Liu. "Mining and summarizing customer reviews". Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (KDD-04), Aug 22-25, 2004, Seattle, Washington, USA.
- Bing Liu, Minqing Hu and Junsheng Cheng. "Opinion Observer: Analyzing and Comparing Opinions on the Web". Proceedings of the 14th International World Wide Web conference (WWW-2005), May 10-14, 2005, Chiba, Japan.

IgnoreReadmeCorpusView is a corpus view used to skip the initial readme block of the corpus: it opens self._stream, skips the readme block, and sets the initial position to the current stream position.

OpinionLexiconCorpusReader is the reader for the Liu and Hu opinion lexicon; blank lines and the readme are ignored.

    >>> from nltk.corpus import opinion_lexicon
    >>> opinion_lexicon.words()
    ['2-faced', '2-faces', 'abnormal', 'abolish', ...]

The OpinionLexiconCorpusReader provides shortcuts to retrieve positive/negative words:

    >>> opinion_lexicon.negative()
    ['2-faced', '2-faces', 'abnormal', 'abolish', ...]

Note that words from the words() method are sorted by file id, not alphabetically:

    >>> opinion_lexicon.words()[0:10]  # doctest: +NORMALIZE_WHITESPACE
    ['2-faced', '2-faces', 'abnormal', 'abolish', 'abominable', 'abominably',
     'abominate', 'abomination', 'abort', 'aborted']
    >>> sorted(opinion_lexicon.words())[0:10]  # doctest: +NORMALIZE_WHITESPACE
    ['2-faced', '2-faces', 'a+', 'abnormal', 'abolish', 'abominable', 'abominably',
     'abominate', 'abomination', 'abort']

words(fileids=None) returns all words in the opinion lexicon (note that these words are not sorted alphabetically); fileids is a list or regexp specifying the ids of the files whose words have to be returned, and the given file(s) are returned as a list(str) of words and punctuation symbols. positive() returns all positive words and negative() all negative words, each in alphabetical order, as list(str). The private _read_word_block() helper reads 20 lines at a time. A small illustrative helper follows the implementation below.
from nltk.corpus.reader import WordListCorpusReader
from nltk.corpus.reader.api import *


class IgnoreReadmeCorpusView(StreamBackedCorpusView):
    def __init__(self, *args, **kwargs):
        StreamBackedCorpusView.__init__(self, *args, **kwargs)
        # Skip the readme block and set the initial position to the
        # current stream position.
        self._open()
        read_blankline_block(self._stream)
        self._filepos = [self._stream.tell()]


class OpinionLexiconCorpusReader(WordListCorpusReader):
    CorpusView = IgnoreReadmeCorpusView

    def words(self, fileids=None):
        if fileids is None:
            fileids = self._fileids
        elif isinstance(fileids, str):
            fileids = [fileids]
        return concat(
            [
                self.CorpusView(path, self._read_word_block, encoding=enc)
                for (path, enc, fileid) in self.abspaths(fileids, True, True)
            ]
        )

    def positive(self):
        return self.words("positive-words.txt")

    def negative(self):
        return self.words("negative-words.txt")

    def _read_word_block(self, stream):
        words = []
        # Read 20 lines at a time.
        for i in range(20):
            line = stream.readline()
            if not line:
                continue
            words.append(line.strip())
        return words
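Finally, a small illustrative helper built on the positive()/negative() shortcuts documented above. The helper is hypothetical (not part of the reader) and assumes the opinion_lexicon data package is installed.

from nltk.corpus import opinion_lexicon

# Hypothetical convenience wrapper around the two shortcut methods.
POSITIVE = set(opinion_lexicon.positive())
NEGATIVE = set(opinion_lexicon.negative())

def polarity_counts(tokens):
    """Count how many lower-cased tokens fall in each half of the lexicon."""
    tokens = [t.lower() for t in tokens]
    return sum(t in POSITIVE for t in tokens), sum(t in NEGATIVE for t in tokens)

print(polarity_counts("This phone is amazing but the battery is horrible".split()))
# -> (1, 1): "amazing" is in the positive list, "horrible" in the negative list.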