The Apriori algorithm is an association rule mining technique, also known as market basket analysis, that aims to discover interesting relationships or associations among a set of items in a transactional or relational database. For example, Apriori may state that if a customer buys item A and item B, then they are likely to buy item C. This rule suggests a relationship between items A, B, and C, indicating that customers who purchased A and B are more likely to also purchase item C.

Wiki: https://en.wikipedia.org/wiki/Apriori_algorithm
Examples: https://www.kaggle.com/code/earthian/apriori-association-rules-mining

load_data(): returns a sample transaction dataset.
>>> load_data()
[['milk'], ['milk', 'butter'], ['milk', 'bread'], ['milk', 'bread', 'chips']]

prune(): prunes candidate itemsets that are not frequent. The goal of pruning is to filter out candidate itemsets that are not frequent. This is done by checking whether all the (k-1)-subsets of a candidate itemset are present in the frequent itemsets of the previous iteration (valid subsequences of the frequent itemsets from the previous iteration).
>>> itemset = ['X', 'Y', 'Z']
>>> candidates = [['X', 'Y'], ['X', 'Z'], ['Y', 'Z']]
>>> prune(itemset, candidates, 2)
[['X', 'Y'], ['X', 'Z'], ['Y', 'Z']]
>>> itemset = ['1', '2', '3', '4']
>>> candidates = ['1', '2', '4']
>>> prune(itemset, candidates, 3)
[]

apriori(): the Apriori algorithm for finding frequent itemsets. Returns a list of frequent itemsets and their support counts.
Args: data, a list of transactions where each transaction is a list of items; min_support, the minimum support threshold for frequent itemsets (a user-defined threshold or minimum support level).
>>> data = [['A', 'B', 'C'], ['A', 'B'], ['A', 'C'], ['A', 'D'], ['B', 'C']]
>>> apriori(data, 2)
[(['A', 'B'], 2), (['A', 'C'], 2), (['B', 'C'], 2)]
>>> data = [['1', '2', '3'], ['1', '2'], ['1', '3'], ['1', '4'], ['2', '3']]
>>> apriori(data, 3)
[]
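Before the implementation, a small hedged illustration of the support and confidence behind such rules; the baskets and the support() helper below are made up for demonstration and are separate from the code that follows.

# Made-up baskets to illustrate support and confidence for the rule {A, B} -> {C}
transactions = [
    {"A", "B", "C"},
    {"A", "B"},
    {"A", "B", "C"},
    {"B", "C"},
]

def support(itemset: set, baskets: list) -> float:
    # Fraction of baskets containing every item in the itemset
    return sum(itemset <= basket for basket in baskets) / len(baskets)

# confidence({A, B} -> {C}) = support({A, B, C}) / support({A, B})
confidence = support({"A", "B", "C"}, transactions) / support({"A", "B"}, transactions)
print(confidence)  # 2/3: of the 3 baskets containing A and B, 2 also contain C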
from itertools import combinations


def load_data() -> list[list[str]]:
    # Returns a sample transaction dataset
    return [["milk"], ["milk", "butter"], ["milk", "bread"], ["milk", "bread", "chips"]]


def prune(itemset: list, candidates: list, length: int) -> list:
    # Keep only candidates whose items are all valid subsequences of the
    # frequent itemsets from the previous iteration
    pruned = []
    for candidate in candidates:
        is_subsequence = True
        for item in candidate:
            if item not in itemset or itemset.count(item) < length - 1:
                is_subsequence = False
                break
        if is_subsequence:
            pruned.append(candidate)
    return pruned


def apriori(data: list[list[str]], min_support: int) -> list[tuple[list[str], int]]:
    itemset = [list(transaction) for transaction in data]
    frequent_itemsets = []
    length = 1

    while itemset:
        # Count itemset support
        counts = [0] * len(itemset)
        for transaction in data:
            for j, candidate in enumerate(itemset):
                if all(item in transaction for item in candidate):
                    counts[j] += 1

        # Prune infrequent itemsets, keeping each survivor paired with its own
        # count so the counts stay aligned after filtering
        survivors = [
            (item, counts[i]) for i, item in enumerate(itemset) if counts[i] >= min_support
        ]
        itemset = [item for item, _ in survivors]

        # Append frequent itemsets as a list to maintain order
        for item, count in survivors:
            frequent_itemsets.append((sorted(item), count))

        length += 1
        itemset = prune(itemset, list(combinations(itemset, length)), length)

    return frequent_itemsets


if __name__ == "__main__":
    import doctest

    doctest.testmod()
    frequent_itemsets = apriori(data=load_data(), min_support=2)
    print("\n".join(f"{itemset}: {support}" for itemset, support in frequent_itemsets))
The A* algorithm combines features of uniform-cost search and pure heuristic search to efficiently compute optimal solutions. A* is a best-first search algorithm in which the cost associated with a node is f(n) = g(n) + h(n), where g(n) is the cost of the path from the initial state to node n, and h(n) is the heuristic estimate of the cost of a path from node n to a goal. A* introduces a heuristic into a regular graph-searching algorithm, essentially planning ahead at each step so a more optimal decision is made. For this reason, A* is known as an algorithm with brains.

https://en.wikipedia.org/wiki/A*_search_algorithm

Cell: represents a cell in the world, which has the properties:
position: represented by a tuple of x and y coordinates, initially set to (0, 0)
parent: contains the parent cell object visited before we arrived at this cell
g, h, f: parameters used when calling our heuristic function
Cell overrides the equals method because otherwise cell comparison would give wrong results.

Gridworld: represents the external world, here a grid (an m*m matrix); world_size creates a numpy array with the given world_size (default is 5). get_neigbours returns the neighbours of a cell.

astar(): implementation of the A* algorithm.
world: object of the Gridworld
start: Cell object for the start position
stop: Cell object for the goal position
>>> p = Gridworld()
>>> start = Cell()
>>> start.position = (0, 0)
>>> goal = Cell()
>>> goal.position = (4, 4)
>>> astar(p, start, goal)
[(0, 0), (1, 1), (2, 2), (3, 3), (4, 4)]
In the demo, the start position and goal are marked in the world grid just for visual reasons.
import numpy as np


class Cell:
    def __init__(self):
        self.position = (0, 0)
        self.parent = None
        self.g = 0
        self.h = 0
        self.f = 0

    def __eq__(self, cell):
        return self.position == cell.position

    def showcell(self):
        print(self.position)


class Gridworld:
    def __init__(self, world_size=(5, 5)):
        self.w = np.zeros(world_size)
        self.world_x_limit = world_size[0]
        self.world_y_limit = world_size[1]

    def show(self):
        print(self.w)

    def get_neigbours(self, cell):
        # Offsets of the 8 surrounding coordinates
        neughbour_cord = [
            (-1, -1), (-1, 0), (-1, 1),
            (0, -1), (0, 1),
            (1, -1), (1, 0), (1, 1),
        ]
        current_x = cell.position[0]
        current_y = cell.position[1]
        neighbours = []
        for n in neughbour_cord:
            x = current_x + n[0]
            y = current_y + n[1]
            if 0 <= x < self.world_x_limit and 0 <= y < self.world_y_limit:
                c = Cell()
                c.position = (x, y)
                c.parent = cell
                neighbours.append(c)
        return neighbours


def astar(world, start, goal):
    _open = []
    _closed = []
    _open.append(start)

    while _open:
        min_f = np.argmin([n.f for n in _open])
        current = _open[min_f]
        _closed.append(_open.pop(min_f))
        if current == goal:
            break
        for n in world.get_neigbours(current):
            # Skip neighbours that have already been expanded
            if any(c == n for c in _closed):
                continue
            n.g = current.g + 1
            x1, y1 = n.position
            x2, y2 = goal.position
            n.h = (y2 - y1) ** 2 + (x2 - x1) ** 2
            n.f = n.h + n.g
            # Skip neighbours already queued with an equal or lower cost
            if any(c == n and c.f <= n.f for c in _open):
                continue
            _open.append(n)

    # Reconstruct the path by walking back through the parents
    path = []
    while current.parent is not None:
        path.append(current.position)
        current = current.parent
    path.append(current.position)
    return path[::-1]


if __name__ == "__main__":
    world = Gridworld()
    start = Cell()
    start.position = (0, 0)
    goal = Cell()
    goal.position = (4, 4)
    print(f"path from {start.position} to {goal.position}")
    s = astar(world, start, goal)
    for i in s:
        world.w[i] = 1
    print(world.w)
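A design note: the h used above is the squared Euclidean distance, which can overestimate the true remaining cost on an 8-connected grid where straight and diagonal moves both cost 1, so A* is not guaranteed to return an optimal path with it. Below is a hedged sketch of an admissible drop-in alternative; the name chebyshev_heuristic is mine, not the original code's.

def chebyshev_heuristic(position: tuple, goal: tuple) -> int:
    # Chebyshev distance: admissible when diagonal and straight moves both cost 1
    (x1, y1), (x2, y2) = position, goal
    return max(abs(x2 - x1), abs(y2 - y1))

# e.g. inside astar(): n.h = chebyshev_heuristic(n.position, goal.position)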
Demonstration of automatic differentiation (reverse mode).
Reference: https://en.wikipedia.org/wiki/Automatic_differentiation
Author: Poojan Smart
Email: smrtpoojan@gmail.com

OpType: class that represents the list of supported operations on Variable for gradient calculation.

Variable: class that represents an n-dimensional object which is used to wrap a numpy array on which operations will be performed and the gradient will be calculated. It keeps pointers to the operations to which the variable is an input (param_to) and a pointer to the operation of which the variable is the output (result_of). For each arithmetic operation, if the tracker is enabled, the computation graph will be updated.
Examples:
>>> Variable(5.0)
Variable(5.0)
>>> Variable([5.0, 2.9])
Variable([5.  2.9])
>>> Variable([5.0, 2.9]) + Variable([1.0, 5.5])
Variable([6.8 8.4])
>>> Variable([[8.0, 10.0]])
Variable([[ 8. 10.]])

Operation: class that represents an operation between single or two Variable objects. Operation objects contain the type of operation, pointers to the input Variable objects, and a pointer to the resulting Variable from the operation.

GradientTracker: class that contains methods to compute partial derivatives of a Variable based on the computation graph. It follows the singleton design pattern: __new__ executes at the creation of a class object and returns the existing instance if the object is already created.
Examples:
>>> with GradientTracker() as tracker:
...     a = Variable([2.0, 5.0])
...     b = Variable([1.0, 2.0])
...     m = Variable([1.0, 2.0])
...     c = a + b
...     d = a * b
...     e = c / d
>>> tracker.gradient(e, a)
array([-0.25, -0.04])
>>> tracker.gradient(e, b)
array([-1.  , -0.25])
>>> tracker.gradient(e, m) is None
True

>>> with GradientTracker() as tracker:
...     a = Variable([[2.0, 5.0]])
...     b = Variable([[1.0], [2.0]])
...     c = a @ b
>>> tracker.gradient(c, a)
array([[1., 2.]])
>>> tracker.gradient(c, b)
array([[2.],
       [5.]])

>>> with GradientTracker() as tracker:
...     a = Variable([2.0, 5.0])
...     b = a ** 3
>>> tracker.gradient(b, a)
array([12., 75.])

append(): adds an Operation object to the related Variable objects for creating the computational graph used for calculating gradients.
Args: op_type, the operation type; params, the input parameters to the operation; output, the output variable of the operation.

gradient(): reverse accumulation of partial derivatives to calculate gradients of the target variable with respect to the source variable. Starting from the partial derivative with respect to the target, it iterates through each operation in the computation graph and, as per the chain rule, multiplies the partial derivatives of the variables with respect to the target.
Args: target, the target variable for which gradients are calculated; source, the source variable with respect to which the gradients are calculated.
Returns: the gradient of the source variable with respect to the target variable.

derivative(): computes the derivative of a given operation/function.
Args: param, the variable to be differentiated; operation, the function performed on the input variable.
Returns: the derivative of the input variable with respect to the output of the operation.
from __future__ import annotations

from collections import defaultdict
from enum import Enum
from types import TracebackType
from typing import Any

import numpy as np
from typing_extensions import Self


class OpType(Enum):
    ADD = 0
    SUB = 1
    MUL = 2
    DIV = 3
    MATMUL = 4
    POWER = 5
    NOOP = 6


class Variable:
    def __init__(self, value: Any) -> None:
        self.value = np.array(value)
        # pointers to the operations to which the variable is an input
        self.param_to: list[Operation] = []
        # pointer to the operation of which the variable is the output
        self.result_of: Operation = Operation(OpType.NOOP)

    def __repr__(self) -> str:
        return f"Variable({self.value})"

    def to_ndarray(self) -> np.ndarray:
        return self.value

    def __add__(self, other: Variable) -> Variable:
        result = Variable(self.value + other.value)
        with GradientTracker() as tracker:
            # if tracker is enabled, the computation graph will be updated
            if tracker.enabled:
                tracker.append(OpType.ADD, params=[self, other], output=result)
        return result

    def __sub__(self, other: Variable) -> Variable:
        result = Variable(self.value - other.value)
        with GradientTracker() as tracker:
            if tracker.enabled:
                tracker.append(OpType.SUB, params=[self, other], output=result)
        return result

    def __mul__(self, other: Variable) -> Variable:
        result = Variable(self.value * other.value)
        with GradientTracker() as tracker:
            if tracker.enabled:
                tracker.append(OpType.MUL, params=[self, other], output=result)
        return result

    def __truediv__(self, other: Variable) -> Variable:
        result = Variable(self.value / other.value)
        with GradientTracker() as tracker:
            if tracker.enabled:
                tracker.append(OpType.DIV, params=[self, other], output=result)
        return result

    def __matmul__(self, other: Variable) -> Variable:
        result = Variable(self.value @ other.value)
        with GradientTracker() as tracker:
            if tracker.enabled:
                tracker.append(OpType.MATMUL, params=[self, other], output=result)
        return result

    def __pow__(self, power: int) -> Variable:
        result = Variable(self.value**power)
        with GradientTracker() as tracker:
            if tracker.enabled:
                tracker.append(
                    OpType.POWER,
                    params=[self],
                    output=result,
                    other_params={"power": power},
                )
        return result

    def add_param_to(self, param_to: Operation) -> None:
        self.param_to.append(param_to)

    def add_result_of(self, result_of: Operation) -> None:
        self.result_of = result_of


class Operation:
    def __init__(
        self,
        op_type: OpType,
        other_params: dict | None = None,
    ) -> None:
        self.op_type = op_type
        self.other_params = {} if other_params is None else other_params

    def add_params(self, params: list[Variable]) -> None:
        self.params = params

    def add_output(self, output: Variable) -> None:
        self.output = output

    def __eq__(self, value) -> bool:
        return self.op_type == value if isinstance(value, OpType) else False


class GradientTracker:
    instance = None

    def __new__(cls) -> Self:
        # singleton: return the existing instance if one was already created
        if cls.instance is None:
            cls.instance = super().__new__(cls)
        return cls.instance

    def __init__(self) -> None:
        self.enabled = False

    def __enter__(self) -> Self:
        self.enabled = True
        return self

    def __exit__(
        self,
        exc_type: type[BaseException] | None,
        exc: BaseException | None,
        traceback: TracebackType | None,
    ) -> None:
        self.enabled = False

    def append(
        self,
        op_type: OpType,
        params: list[Variable],
        output: Variable,
        other_params: dict | None = None,
    ) -> None:
        # adds an Operation to the related Variables, building the
        # computational graph used for gradient calculation
        operation = Operation(op_type, other_params=other_params)
        param_nodes = []
        for param in params:
            param.add_param_to(operation)
            param_nodes.append(param)
        output.add_result_of(operation)
        operation.add_params(param_nodes)
        operation.add_output(output)

    def gradient(self, target: Variable, source: Variable) -> np.ndarray | None:
        # reverse accumulation of partial derivatives with respect to target
        partial_deriv = defaultdict(lambda: 0)
        partial_deriv[target] = np.ones_like(target.to_ndarray())

        # iterate through each operation in the computation graph
        operation_queue = [target.result_of]
        while len(operation_queue) > 0:
            operation = operation_queue.pop()
            for param in operation.params:
                # as per the chain rule, multiply the partial derivatives
                # of the variables with respect to the target
                dparam_doutput = self.derivative(param, operation)
                dparam_dtarget = dparam_doutput * partial_deriv[operation.output]
                partial_deriv[param] += dparam_dtarget
                if param.result_of and param.result_of != OpType.NOOP:
                    operation_queue.append(param.result_of)

        return partial_deriv.get(source)

    def derivative(self, param: Variable, operation: Operation) -> np.ndarray:
        # derivative of the operation's output with respect to `param`
        params = operation.params

        if operation == OpType.ADD:
            return np.ones_like(params[0].to_ndarray(), dtype=np.float64)
        if operation == OpType.SUB:
            if params[0] == param:
                return np.ones_like(params[0].to_ndarray(), dtype=np.float64)
            return -np.ones_like(params[1].to_ndarray(), dtype=np.float64)
        if operation == OpType.MUL:
            return (
                params[1].to_ndarray().T
                if params[0] == param
                else params[0].to_ndarray().T
            )
        if operation == OpType.DIV:
            if params[0] == param:
                return 1 / params[1].to_ndarray()
            return -params[0].to_ndarray() / (params[1].to_ndarray() ** 2)
        if operation == OpType.MATMUL:
            return (
                params[1].to_ndarray().T
                if params[0] == param
                else params[0].to_ndarray().T
            )
        if operation == OpType.POWER:
            power = operation.other_params["power"]
            return power * (params[0].to_ndarray() ** (power - 1))

        err_msg = f"invalid operation type: {operation.op_type}"
        raise ValueError(err_msg)


if __name__ == "__main__":
    import doctest

    doctest.testmod()
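A quick usage sketch of GradientTracker, mirroring the power-rule example from the description above:

with GradientTracker() as tracker:
    a = Variable([2.0, 5.0])
    b = a**3

# d(a**3)/da = 3 * a**2
print(tracker.gradient(b, a))  # [12. 75.] per the power-rule doctest above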
Normalization.
Wikipedia: https://en.wikipedia.org/wiki/Normalization
Normalization is the process of converting numerical data to a standard range of values. This range is typically between [0, 1] or [-1, 1]. The equation for normalization is x_norm = (x - x_min) / (x_max - x_min), where x_norm is the normalized value, x is the value, x_min is the minimum value within the column or list of data, and x_max is the maximum value within the column or list of data. Normalization is used to speed up the training of data and put all of the data on a similar scale. This is useful because variance in the range of values of a dataset can heavily impact optimization, particularly gradient descent.

Standardization.
Wikipedia: https://en.wikipedia.org/wiki/Standardization
Standardization is the process of converting numerical data to a normally distributed range of values. This range will have a mean of 0 and standard deviation of 1. This is also known as z-score normalization. The equation for standardization is x_std = (x - mu) / sigma, where mu is the mean of the column or list of values and sigma is the standard deviation of the column or list of values.

Choosing between normalization and standardization is more of an art than a science, but it is often recommended to run experiments with both to see which performs better. Additionally, a few rules of thumb are:
1. Gaussian (normal) distributions work better with standardization.
2. Non-Gaussian (non-normal) distributions work better with normalization.
3. If a column or list of values has extreme values (outliers), use standardization.

normalization(): returns a normalized list of values.
Params: data, a list of values to normalize. Returns a list of normalized values, rounded to ndigits decimal places.
>>> normalization([2, 7, 10, 20, 30, 50])
[0.0, 0.104, 0.167, 0.375, 0.583, 1.0]
>>> normalization([5, 10, 15, 20, 25])
[0.0, 0.25, 0.5, 0.75, 1.0]

standardization(): returns a standardized list of values.
Params: data, a list of values to standardize. Returns a list of standardized values, rounded to ndigits decimal places.
>>> standardization([2, 7, 10, 20, 30, 50])
[-0.999, -0.719, -0.551, 0.009, 0.57, 1.69]
>>> standardization([5, 10, 15, 20, 25])
[-1.265, -0.632, 0.0, 0.632, 1.265]
from statistics import mean, stdev


def normalization(data: list, ndigits: int = 3) -> list:
    # variables for calculation
    x_min = min(data)
    x_max = max(data)
    # normalize data into [0, 1]
    return [round((x - x_min) / (x_max - x_min), ndigits) for x in data]


def standardization(data: list, ndigits: int = 3) -> list:
    # variables for calculation
    mu = mean(data)
    sigma = stdev(data)
    # standardize data to mean 0 and standard deviation 1 (z-score)
    return [round((x - mu) / sigma, ndigits) for x in data]
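A brief usage sketch of the two helpers above on made-up data with one extreme value, echoing rule of thumb 3; the printed values are approximate.

# Made-up data with one outlier (200)
data = [5, 10, 15, 20, 200]
print(normalization(data))    # outlier pins the rest near 0: [0.0, 0.026, 0.051, 0.077, 1.0]
print(standardization(data))  # roughly [-0.535, -0.476, -0.416, -0.357, 1.785]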
Implementation of a basic regression decision tree.
Input data set: the input data set must be one-dimensional with continuous labels.
Output: the decision tree maps a real number input to a real number output.

mean_squared_error:
@param labels: a one-dimensional numpy array
@param prediction: a floating point value
@return value: mean_squared_error calculates the error if prediction is used to estimate the labels
>>> tester = DecisionTree()
>>> test_labels = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
>>> test_prediction = float(6)
>>> tester.mean_squared_error(test_labels, test_prediction) == (
...     TestDecisionTree.helper_mean_squared_error_test(test_labels, test_prediction))
True
>>> test_labels = np.array([1, 2, 3])
>>> test_prediction = float(2)
>>> tester.mean_squared_error(test_labels, test_prediction) == (
...     TestDecisionTree.helper_mean_squared_error_test(test_labels, test_prediction))
True

train:
@param x: a one-dimensional numpy array
@param y: a one-dimensional numpy array; the contents of y are the labels for the corresponding x values
train() does not have a return value.
Examples:
1. Try to train when x and y are of the same length and 1 dimension (no errors):
>>> dt = DecisionTree()
>>> dt.train(np.array([10, 20, 30, 40, 50]), np.array([0, 0, 0, 1, 1]))
2. Try to train when x is 2 dimensions:
>>> dt = DecisionTree()
>>> dt.train(np.array([[1, 2, 3, 4, 5], [1, 2, 3, 4, 5]]), np.array([0, 0, 0, 1, 1]))
Traceback (most recent call last):
    ...
ValueError: Input data set must be one-dimensional
3. Try to train when x and y are not of the same length:
>>> dt = DecisionTree()
>>> dt.train(np.array([1, 2, 3, 4, 5]), np.array([[0, 0, 0, 1, 1], [0, 0, 0, 1, 1]]))
Traceback (most recent call last):
    ...
ValueError: x and y have different lengths
4. Try to train when x and y are of the same length but different dimensions:
>>> dt = DecisionTree()
>>> dt.train(np.array([1, 2, 3, 4, 5]), np.array([[1], [2], [3], [4], [5]]))
Traceback (most recent call last):
    ...
ValueError: Data set labels must be one-dimensional

The train method first checks that the inputs conform to our dimensionality constraints, then loops over all possible splits for the decision tree and finds the best one. If no split exists with less than 2 * (the error for the entire array), then the data set is not split, and the average for the entire array is used as the predictor.

predict:
@param x: a floating point value to predict the label of
The prediction function works by recursively calling the predict function of the appropriate subtree based on the tree's decision boundary.

TestDecisionTree: decision tree test class. Its static helper_mean_squared_error_test takes the same labels and prediction parameters and calculates the mean squared error independently, for verification.

In the demonstration (main), we generate a sample data set from the sin function in numpy, train a decision tree on the data set, and use the decision tree to predict the labels of 10 different test values; the mean squared error over this test is then displayed.
import numpy as np


class DecisionTree:
    def __init__(self, depth=5, min_leaf_size=5):
        self.depth = depth
        self.decision_boundary = 0
        self.left = None
        self.right = None
        self.min_leaf_size = min_leaf_size
        self.prediction = None

    def mean_squared_error(self, labels, prediction):
        if labels.ndim != 1:
            print("Error: Input labels must be one dimensional")
        return np.mean((labels - prediction) ** 2)

    def train(self, x, y):
        # Check that the inputs conform to our dimensionality constraints
        if x.ndim != 1:
            raise ValueError("Input data set must be one-dimensional")
        if len(x) != len(y):
            raise ValueError("x and y have different lengths")
        if y.ndim != 1:
            raise ValueError("Data set labels must be one-dimensional")

        if len(x) < 2 * self.min_leaf_size:
            self.prediction = np.mean(y)
            return
        if self.depth == 1:
            self.prediction = np.mean(y)
            return

        best_split = 0
        # A split must beat twice the error of predicting the overall mean
        min_error = self.mean_squared_error(y, np.mean(y)) * 2

        # Loop over all possible splits for the decision tree, find the best
        for i in range(len(x)):
            if len(x[:i]) < self.min_leaf_size or len(x[i:]) < self.min_leaf_size:
                continue
            error_left = self.mean_squared_error(y[:i], np.mean(y[:i]))
            error_right = self.mean_squared_error(y[i:], np.mean(y[i:]))
            error = error_left + error_right
            if error < min_error:
                best_split = i
                min_error = error

        if best_split != 0:
            left_x = x[:best_split]
            left_y = y[:best_split]
            right_x = x[best_split:]
            right_y = y[best_split:]

            self.decision_boundary = x[best_split]
            self.left = DecisionTree(
                depth=self.depth - 1, min_leaf_size=self.min_leaf_size
            )
            self.right = DecisionTree(
                depth=self.depth - 1, min_leaf_size=self.min_leaf_size
            )
            self.left.train(left_x, left_y)
            self.right.train(right_x, right_y)
        else:
            # No split improves on predicting the overall mean
            self.prediction = np.mean(y)


    def predict(self, x):
        if self.prediction is not None:
            return self.prediction
        elif self.left is not None and self.right is not None:
            # Recurse into the subtree on the correct side of the boundary
            if x >= self.decision_boundary:
                return self.right.predict(x)
            else:
                return self.left.predict(x)
        else:
            print("Error: Decision tree not yet trained")
            return None


class TestDecisionTree:
    @staticmethod
    def helper_mean_squared_error_test(labels, prediction):
        squared_error_sum = float(0)
        for label in labels:
            squared_error_sum += (label - prediction) ** 2
        return float(squared_error_sum / labels.size)


def main():
    # Generate a sample data set from the sin function and train a tree on it
    x = np.arange(-1.0, 1.0, 0.005)
    y = np.sin(x)

    tree = DecisionTree(depth=10, min_leaf_size=10)
    tree.train(x, y)

    test_cases = (np.random.rand(10) * 2) - 1
    predictions = np.array([tree.predict(x) for x in test_cases])
    # Compare predictions to the true labels sin(test_cases)
    avg_error = np.mean((predictions - np.sin(test_cases)) ** 2)

    print("Test values: " + str(test_cases))
    print("Predictions: " + str(predictions))
    print("Average error: " + str(avg_error))


if __name__ == "__main__":
    main()
    import doctest

    doctest.testmod(name="mean_squared_error", verbose=True)
(C) 2023 Diego Gasco (diego.gasco99@gmail.com), Diegomangasco on GitHub

Requirements:
- numpy version 1.21
- scipy version 1.3.3

Notes:
- Each column of the features matrix corresponds to a class item.

column_reshape(): function to reshape a row numpy array into a column numpy array.
>>> input_array = np.array([1, 2, 3])
>>> column_reshape(input_array)
array([[1],
       [2],
       [3]])

covariance_within_classes(): function to compute the covariance matrix inside each class. The data of each class i is centralized; covariance_sum starts as np.nan (i.e. in the first loop it is assigned) and is accumulated afterwards.
>>> features = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
>>> labels = np.array([0, 1, 0])
>>> covariance_within_classes(features, labels, 2)
array([[0.66666667, 0.66666667, 0.66666667],
       [0.66666667, 0.66666667, 0.66666667],
       [0.66666667, 0.66666667, 0.66666667]])

covariance_between_classes(): function to compute the covariance matrix between multiple classes. As above, covariance_sum is assigned in the first loop and accumulated afterwards.
>>> features = np.array([[9, 2, 3], [4, 3, 6], [1, 8, 9]])
>>> labels = np.array([0, 1, 0])
>>> covariance_between_classes(features, labels, 2)
array([[ 3.55555556,  1.77777778, -2.66666667],
       [ 1.77777778,  0.88888889, -1.33333333],
       [-2.66666667, -1.33333333,  2.        ]])

principal_component_analysis(): Principal Component Analysis. For more details, see: https://en.wikipedia.org/wiki/Principal_component_analysis
Parameters: features, the features extracted from the dataset; dimensions, to filter the projected data for the desired dimension. Tested by test_principal_component_analysis. The function checks if the features have been loaded, centers the dataset, takes all the eigenvector columns in the reverse order (-1) and then keeps only the first `dimensions` of them, and projects the dataset onto the new space.

linear_discriminant_analysis(): Linear Discriminant Analysis. For more details, see: https://en.wikipedia.org/wiki/Linear_discriminant_analysis
Parameters: features, the features extracted from the dataset; labels, the class labels of the features; classes, the number of classes present in the dataset; dimensions, to filter the projected data for the desired dimension. Tested by test_linear_discriminant_analysis. The function checks that the desired dimension is less than the number of classes and that the features have already been loaded.

The tests create a dummy dataset with 2 classes and 3 features, and assert that the functions raise an AssertionError if dimensions > classes or if the dataset is empty.
import logging

import numpy as np
import pytest
from scipy.linalg import eigh

logging.basicConfig(level=logging.INFO, format="%(message)s")


def column_reshape(input_array: np.ndarray) -> np.ndarray:
    return input_array.reshape((input_array.size, 1))


def covariance_within_classes(
    features: np.ndarray, labels: np.ndarray, classes: int
) -> np.ndarray:
    covariance_sum = np.nan
    for i in range(classes):
        data = features[:, labels == i]
        data_mean = data.mean(1)
        # Centralize the data of class i
        centered_data = data - column_reshape(data_mean)
        if i > 0:
            # If covariance_sum is not None
            covariance_sum += np.dot(centered_data, centered_data.T)
        else:
            # If covariance_sum is np.nan (i.e. first loop)
            covariance_sum = np.dot(centered_data, centered_data.T)
    return covariance_sum / features.shape[1]


def covariance_between_classes(
    features: np.ndarray, labels: np.ndarray, classes: int
) -> np.ndarray:
    general_data_mean = features.mean(1)
    covariance_sum = np.nan
    for i in range(classes):
        data = features[:, labels == i]
        device_data = data.shape[1]
        data_mean = data.mean(1)
        if i > 0:
            # If covariance_sum is not None
            covariance_sum += device_data * np.dot(
                column_reshape(data_mean) - column_reshape(general_data_mean),
                (column_reshape(data_mean) - column_reshape(general_data_mean)).T,
            )
        else:
            # If covariance_sum is np.nan (i.e. first loop)
            covariance_sum = device_data * np.dot(
                column_reshape(data_mean) - column_reshape(general_data_mean),
                (column_reshape(data_mean) - column_reshape(general_data_mean)).T,
            )
    return covariance_sum / features.shape[1]


def principal_component_analysis(features: np.ndarray, dimensions: int) -> np.ndarray:
    # Check if the features have been loaded
    if features.any():
        data_mean = features.mean(1)
        # Center the dataset
        centered_data = features - np.reshape(data_mean, (data_mean.size, 1))
        covariance_matrix = np.dot(centered_data, centered_data.T) / features.shape[1]
        _, eigenvectors = np.linalg.eigh(covariance_matrix)
        # Take all the columns in the reverse order (-1), then only the first
        filtered_eigenvectors = eigenvectors[:, ::-1][:, 0:dimensions]
        # Project the dataset onto the new space
        projected_data = np.dot(filtered_eigenvectors.T, features)
        logging.info("Principal Component Analysis computed")
        return projected_data
    else:
        logging.basicConfig(level=logging.ERROR, format="%(message)s", force=True)
        logging.error("Dataset empty")
        raise AssertionError


def linear_discriminant_analysis(
    features: np.ndarray, labels: np.ndarray, classes: int, dimensions: int
) -> np.ndarray:
    # Check if the dimension desired is less than the number of classes
    assert classes > dimensions

    # Check if features have already been loaded
    if features.any():
        _, eigenvectors = eigh(
            covariance_between_classes(features, labels, classes),
            covariance_within_classes(features, labels, classes),
        )
        filtered_eigenvectors = eigenvectors[:, ::-1][:, :dimensions]
        svd_matrix, _, _ = np.linalg.svd(filtered_eigenvectors)
        filtered_svd_matrix = svd_matrix[:, 0:dimensions]
        projected_data = np.dot(filtered_svd_matrix.T, features)
        logging.info("Linear Discriminant Analysis computed")
        return projected_data
    else:
        logging.basicConfig(level=logging.ERROR, format="%(message)s", force=True)
        logging.error("Dataset empty")
        raise AssertionError


def test_linear_discriminant_analysis() -> None:
    # Create dummy dataset with 2 classes and 3 features
    features = np.array([[1, 2, 3, 4, 5], [2, 3, 4, 5, 6], [3, 4, 5, 6, 7]])
    labels = np.array([0, 0, 0, 1, 1])
    classes = 2
    dimensions = 2

    # Assert that the function raises an AssertionError if dimensions > classes
    with pytest.raises(AssertionError) as error_info:
        projected_data = linear_discriminant_analysis(
            features, labels, classes, dimensions
        )
        if isinstance(projected_data, np.ndarray):
            raise AssertionError(
                "Did not raise AssertionError for dimensions > classes"
            )
    assert error_info.type is AssertionError


def test_principal_component_analysis() -> None:
    features = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
    dimensions = 2
    expected_output = np.array([[6.92820323, 8.66025404, 10.39230485], [3.0, 3.0, 3.0]])

    with pytest.raises(AssertionError) as error_info:
        output = principal_component_analysis(features, dimensions)
        if not np.allclose(expected_output, output):
            raise AssertionError
    assert error_info.type is AssertionError


if __name__ == "__main__":
    import doctest

    doctest.testmod()
This is code for forecasting, but modified and used as a safety checker for data. For example, you have an online shop and for some reason some data are missing (the amount of data is not what you expected); then we can use this to check it.

PS:
1. Of course we can use a normal statistical method, but in this case the data is quite absurd and there is only a little of it.
2. Of course you can use this and modify it for forecasting purposes (for the next 3 months' sales or something); you can just adjust it for your own purpose.

First method: linear regression.
Input: training data (date, total_user, total_event) in lists of float.
Output: total user prediction in float.
>>> n = linear_regression_prediction([2, 3, 4, 5], [5, 3, 4, 6], [3, 1, 2, 4], [2, 1], [2, 2])
>>> abs(n - 4.0) < 1e-6  # checking precision because of floating point errors
True

Second method: SARIMAX. SARIMAX is a statistical method which uses previous input and learns its pattern to predict future data.
Input: training data (total_user) with exog data (total_event) in lists of float.
Output: total user prediction in float.
>>> sarimax_predictor([4, 2, 6, 8], [3, 1, 2, 4], [2])
6.6666671111109626
The UserWarning raised by SARIMAX due to insufficient observations is suppressed.

Third method: support vector regressor. SVR is quite similar to SVM (support vector machine); it uses the same principles as the SVM for classification, with only a few minor differences, the main one being that it suits regression purposes better.
Input: training data (date, total_user, total_event) in lists of float, where x = list of sets of (date and total event).
Output: total user prediction in float.
>>> support_vector_regressor([[5, 2], [1, 5], [6, 2]], [[3, 2]], [2, 1, 4])
1.634932078116079

Optional method: interquartile range.
Input: list of total user in float.
Output: low limit of input in float.
This method can be used to check whether some data is an outlier or not.
>>> interquartile_range_checker([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
2.8

data_safety_checker() is used to review all the votes (the list of prediction results) and compare them to the actual result.
Input: list of predictions and the actual result.
Output: whether it's safe or not.
>>> data_safety_checker([2, 3, 4], 5.0)
False

The main program reads the data (columns: total user in a day, how many online events were held in one day, and what day it is, Sunday-Saturday), starts normalization, splits the data for the SVR (input variable = total date and total match) and for linear regression and SARIMAX, runs the voting system with forecasting, and checks the safety of today's data.
from warnings import simplefilter

import numpy as np
import pandas as pd
from sklearn.preprocessing import Normalizer
from sklearn.svm import SVR
from statsmodels.tsa.statespace.sarimax import SARIMAX


def linear_regression_prediction(
    train_dt: list, train_usr: list, train_mtch: list, test_dt: list, test_mtch: list
) -> float:
    x = np.array([[1, item, train_mtch[i]] for i, item in enumerate(train_dt)])
    y = np.array(train_usr)
    # Ordinary least squares: beta = (X^T X)^-1 X^T y
    beta = np.dot(np.dot(np.linalg.inv(np.dot(x.transpose(), x)), x.transpose()), y)
    return abs(beta[0] + test_dt[0] * beta[1] + test_mtch[0] * beta[2])


def sarimax_predictor(train_user: list, train_match: list, test_match: list) -> float:
    # Suppress the UserWarning raised by SARIMAX due to insufficient observations
    simplefilter("ignore", UserWarning)
    order = (1, 2, 1)
    seasonal_order = (1, 1, 1, 7)
    model = SARIMAX(
        train_user, exog=train_match, order=order, seasonal_order=seasonal_order
    )
    model_fit = model.fit(disp=False, maxiter=600, method="nm")
    result = model_fit.predict(1, len(test_match), exog=[test_match])
    return result[0]


def support_vector_regressor(x_train: list, x_test: list, train_user: list) -> float:
    regressor = SVR(kernel="rbf", C=1, gamma=0.1, epsilon=0.1)
    regressor.fit(x_train, train_user)
    y_pred = regressor.predict(x_test)
    return y_pred[0]


def interquartile_range_checker(train_user: list) -> float:
    train_user.sort()
    q1 = np.percentile(train_user, 25)
    q3 = np.percentile(train_user, 75)
    iqr = q3 - q1
    low_lim = q1 - (iqr * 0.1)
    return low_lim


def data_safety_checker(list_vote: list, actual_result: float) -> bool:
    safe = 0
    not_safe = 0

    if not isinstance(actual_result, float):
        raise TypeError("Actual result should be float. Value passed is a list")

    for i in list_vote:
        if i > actual_result:
            # A vote above the actual result suggests data is missing
            not_safe += 1
        elif abs(abs(i) - abs(actual_result)) <= 0.1:
            safe += 1
        else:
            not_safe += 1
    return safe > not_safe


if __name__ == "__main__":
    # data column = total user in a day, how much online event held in one day,
    # what day is that (sunday-saturday)
    data_input_df = pd.read_csv("ex_data.csv")

    # start normalization
    normalize_df = Normalizer().fit_transform(data_input_df.values)
    # split data
    total_date = normalize_df[:, 2].tolist()
    total_user = normalize_df[:, 0].tolist()
    total_match = normalize_df[:, 1].tolist()

    # for svr (input variable = total date and total match)
    x = normalize_df[:, [1, 2]].tolist()
    x_train = x[: len(x) - 1]
    x_test = x[len(x) - 1 :]

    # for linear regression & sarimax
    train_date = total_date[: len(total_date) - 1]
    train_user = total_user[: len(total_user) - 1]
    train_match = total_match[: len(total_match) - 1]

    test_date = total_date[len(total_date) - 1 :]
    test_user = total_user[len(total_user) - 1 :]
    test_match = total_match[len(total_match) - 1 :]

    # voting system with forecasting
    res_vote = [
        linear_regression_prediction(
            train_date, train_user, train_match, test_date, test_match
        ),
        sarimax_predictor(train_user, train_match, test_match),
        support_vector_regressor(x_train, x_test, train_user),
    ]

    # check the safety of today's data
    not_str = "" if data_safety_checker(res_vote, test_user[0]) else "not "
    print(f"Today's data is {not_str}safe.")
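A small usage sketch of data_safety_checker with made-up votes: two of the three predictions fall within 0.1 of the actual value, so the data is judged safe.

# Made-up votes around an actual value of 5.0
print(data_safety_checker([5.0, 5.05, 4.98], 5.0))  # True: 2 safe votes outnumber 1 unsafe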
The Frequent Pattern Growth algorithm (FP-Growth) is a widely used data mining technique for discovering frequent itemsets in large transaction databases. It overcomes some of the limitations of traditional methods such as Apriori by efficiently constructing the FP-Tree.

Wiki: https://athena.ecs.csus.edu/~mei/associationcw/FpGrowth.html
Examples: https://www.javatpoint.com/fp-growth-algorithm-in-data-mining

TreeNode: a node in a frequent pattern tree.
Args: name, the name of this node; num_occur, the number of occurrences of the node; parent_node, the parent node.
Example:
>>> parent = TreeNode("parent", 1, None)
>>> child = TreeNode("child", 2, parent)
>>> child.name
'child'
>>> child.count
2

create_tree(): create the frequent pattern tree. The parent is None for the root node.
Args: data_set, a list of transactions where each transaction is a list of items; min_sup, the minimum support threshold (items with support less than this will be pruned; default is 1).
Returns: the root of the FP-Tree, and header_table, the header table dictionary with item information.
Example:
>>> data_set = [['A', 'B', 'C'], ['A', 'C'], ['A', 'B', 'E'], ['A', 'B', 'C', 'E'], ['B', 'E']]
>>> min_sup = 2
>>> fp_tree, header_table = create_tree(data_set, min_sup)
>>> fp_tree
TreeNode('Null Set', 1, None)
>>> len(header_table)
4
>>> header_table['A']
[[4, None], TreeNode('A', 4, TreeNode('Null Set', 1, None))]
>>> header_table['E'][1]  # doctest: +NORMALIZE_WHITESPACE
TreeNode('E', 1, TreeNode('B', 3, TreeNode('A', 4, TreeNode('Null Set', 1, None))))
>>> sorted(header_table)
['A', 'B', 'C', 'E']
>>> fp_tree.name
'Null Set'
>>> sorted(fp_tree.children)
['A', 'B']
>>> fp_tree.children['A'].name
'A'
>>> sorted(fp_tree.children['A'].children)
['B', 'C']

update_tree(): update the FP-Tree with a transaction.
Args: items, the list of items in the transaction; in_tree, the current node in the FP-Tree; header_table, the header table dictionary with item information; count, the count of the transaction.
Example (with data_set and min_sup as above):
>>> fp_tree, header_table = create_tree(data_set, min_sup)
>>> transaction = ['A', 'B', 'E']
>>> update_tree(transaction, fp_tree, header_table, 1)
>>> fp_tree
TreeNode('Null Set', 1, None)
>>> fp_tree.children['A'].children['B'].children['E'].children
{}
>>> fp_tree.children['A'].children['B'].children['E'].count
2
>>> header_table['E'][1].name
'E'

update_header(): update the header table with a node link; returns the updated node.
Args: node_to_test, the node to be updated in the header table; target_node, the node to link to.
Example:
>>> node1 = TreeNode('A', 3, None)
>>> node2 = TreeNode('B', 4, None)
>>> node1
TreeNode('A', 3, None)
>>> node1 = update_header(node1, node2)
>>> node1
TreeNode('A', 3, None)
>>> node1.node_link
TreeNode('B', 4, None)
>>> node2.node_link is None
True

ascend_tree(): ascend the FP-Tree from a leaf node to its root, adding item names to the prefix path.
Args: leaf_node, the leaf node to start ascending from; prefix_path, a list to store the items as they are ascended.
Example (with fp_tree as above):
>>> path = []
>>> ascend_tree(fp_tree.children['A'], path)
>>> path  # ascending from a leaf node 'A'
['A']

find_prefix_path(): find the conditional pattern base for a given base pattern.
Args: base_pat, the base pattern for which to find the conditional pattern base; tree_node, the node in the FP-Tree.
Example (with fp_tree as above):
>>> base_pattern = frozenset(['A'])
>>> sorted(find_prefix_path(base_pattern, fp_tree.children['A']))
[]

mine_tree(): mine the FP-Tree recursively to discover frequent itemsets. Inside, header_table[base_pat][1] is passed as node_to_test to update_header.
Args: in_tree, the FP-Tree to mine; header_table, the header table dictionary with item information; min_sup, the minimum support threshold; pre_fix, a set of items as a prefix for the itemsets being mined; freq_item_list, a list to store the frequent itemsets.
Example (with data_set and min_sup as above):
>>> fp_tree, header_table = create_tree(data_set, min_sup)
>>> frequent_itemsets = []
>>> mine_tree(fp_tree, header_table, min_sup, set(), frequent_itemsets)
>>> expe_itm = [{'C'}, {'C', 'A'}, {'E'}, {'A', 'E'}, {'E', 'B'}, {'A', 'B'}]
>>> all(expected in frequent_itemsets for expected in expe_itm)
True
from __future__ import annotations

from dataclasses import dataclass, field


@dataclass
class TreeNode:
    name: str
    count: int
    parent: TreeNode | None = None
    children: dict[str, TreeNode] = field(default_factory=dict)
    node_link: TreeNode | None = None

    def __repr__(self) -> str:
        return f"TreeNode({self.name!r}, {self.count!r}, {self.parent!r})"

    def inc(self, num_occur: int) -> None:
        self.count += num_occur

    def disp(self, ind: int = 1) -> None:
        print(f"{' ' * ind} {self.name} {self.count}")
        for child in self.children.values():
            child.disp(ind + 1)


def create_tree(data_set: list, min_sup: int = 1) -> tuple[TreeNode, dict]:
    header_table: dict = {}
    for trans in data_set:
        for item in trans:
            header_table[item] = header_table.get(item, [0, None])
            header_table[item][0] += 1

    # Prune items below the minimum support
    for k in list(header_table):
        if header_table[k][0] < min_sup:
            del header_table[k]

    if not (freq_item_set := set(header_table)):
        return TreeNode("Null Set", 1, None), {}

    for k in header_table:
        header_table[k] = [header_table[k], None]

    fp_tree = TreeNode("Null Set", 1, None)  # Parent is None for the root node
    for tran_set in data_set:
        local_d = {
            item: header_table[item][0] for item in tran_set if item in freq_item_set
        }
        if local_d:
            sorted_items = sorted(
                local_d.items(), key=lambda item_info: item_info[1], reverse=True
            )
            ordered_items = [item[0] for item in sorted_items]
            update_tree(ordered_items, fp_tree, header_table, 1)

    return fp_tree, header_table


def update_tree(items: list, in_tree: TreeNode, header_table: dict, count: int) -> None:
    if items[0] in in_tree.children:
        in_tree.children[items[0]].inc(count)
    else:
        in_tree.children[items[0]] = TreeNode(items[0], count, in_tree)
        if header_table[items[0]][1] is None:
            header_table[items[0]][1] = in_tree.children[items[0]]
        else:
            update_header(header_table[items[0]][1], in_tree.children[items[0]])
    if len(items) > 1:
        update_tree(items[1:], in_tree.children[items[0]], header_table, count)


def update_header(node_to_test: TreeNode, target_node: TreeNode) -> TreeNode:
    while node_to_test.node_link is not None:
        node_to_test = node_to_test.node_link
    if node_to_test.node_link is None:
        node_to_test.node_link = target_node
    # Return the updated node
    return node_to_test


def ascend_tree(leaf_node: TreeNode, prefix_path: list[str]) -> None:
    if leaf_node.parent is not None:
        prefix_path.append(leaf_node.name)
        ascend_tree(leaf_node.parent, prefix_path)


def find_prefix_path(base_pat: frozenset, tree_node: TreeNode | None) -> dict:
    cond_pats: dict = {}
    while tree_node is not None:
        prefix_path: list = []
        ascend_tree(tree_node, prefix_path)
        if len(prefix_path) > 1:
            cond_pats[frozenset(prefix_path[1:])] = tree_node.count
        tree_node = tree_node.node_link
    return cond_pats


def mine_tree(
    in_tree: TreeNode,
    header_table: dict,
    min_sup: int,
    pre_fix: set,
    freq_item_list: list,
) -> None:
    sorted_items = sorted(header_table.items(), key=lambda item_info: item_info[1][0])
    big_l = [item[0] for item in sorted_items]
    for base_pat in big_l:
        new_freq_set = pre_fix.copy()
        new_freq_set.add(base_pat)
        freq_item_list.append(new_freq_set)
        cond_patt_bases = find_prefix_path(base_pat, header_table[base_pat][1])
        my_cond_tree, my_head = create_tree(list(cond_patt_bases), min_sup)
        if my_head is not None:
            # Pass header_table[base_pat][1] as node_to_test to update_header
            header_table[base_pat][1] = update_header(
                header_table[base_pat][1], my_cond_tree
            )
            mine_tree(my_cond_tree, my_head, min_sup, new_freq_set, freq_item_list)


if __name__ == "__main__":
    from doctest import testmod

    testmod()
    data_set: list[frozenset] = [
        frozenset(["bread", "milk", "cheese"]),
        frozenset(["bread", "milk"]),
        frozenset(["bread", "diapers"]),
        frozenset(["bread", "milk", "diapers"]),
        frozenset(["milk", "diapers"]),
        frozenset(["milk", "cheese"]),
        frozenset(["diapers", "cheese"]),
        frozenset(["bread", "milk", "cheese", "diapers"]),
    ]
    print(f"{len(data_set) = }")
    fp_tree, header_table = create_tree(data_set, min_sup=3)
    print(f"{fp_tree = }")
    print(f"{len(header_table) = }")
    freq_items: list = []
    mine_tree(fp_tree, header_table, 3, set(), freq_items)
    print(f"{freq_items = }")
GradientBoostingClassifier.

__init__: initialize a GradientBoostingClassifier.
Parameters: n_estimators (int), the number of weak learners to train; learning_rate (float), the learning rate for updating the model.
Attributes: n_estimators (int), the number of weak learners; learning_rate (float), the learning rate; models (list), a list to store the trained weak learners.

fit: fit the GradientBoostingClassifier to the training data. For each estimator it calculates the pseudo-residuals, fits a weak learner (e.g. a decision tree) to the residuals, and updates the model by adding the weak learner with a learning rate.
Parameters: features (np.ndarray), the training features; target (np.ndarray), the target values.
Returns: None.
>>> import numpy as np
>>> from sklearn.datasets import load_iris
>>> clf = GradientBoostingClassifier(n_estimators=100, learning_rate=0.1)
>>> iris = load_iris()
>>> X, y = iris.data, iris.target
>>> clf.fit(X, y)
>>> # Check if the model is trained
>>> len(clf.models) == 100
True

predict: make predictions on input data. Predictions are initialized with zeros, accumulated across the weak learners, and converted to binary predictions (-1 or 1).
Parameters: features (np.ndarray), the input data for making predictions.
Returns: np.ndarray, an array of binary predictions (-1 or 1).
>>> y_pred = clf.predict(X)
>>> # Check if the predictions have the correct shape
>>> y_pred.shape == y.shape
True

gradient: calculate the negative gradient (pseudo-residuals) for the logistic loss.
Parameters: target (np.ndarray), the target values; y_pred (np.ndarray), the predicted values.
Returns: np.ndarray, an array of pseudo-residuals.
>>> clf = GradientBoostingClassifier(n_estimators=100, learning_rate=0.1)
>>> target = np.array([0, 1, 0, 1])
>>> y_pred = np.array([0.2, 0.8, 0.3, 0.7])
>>> residuals = clf.gradient(target, y_pred)
>>> # Check if residuals have the correct shape
>>> residuals.shape == target.shape
True
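The pseudo-residuals come from the logistic loss L(y, F) = log(1 + exp(-y*F)) for labels y in {-1, 1}, whose gradient with respect to F is -y / (1 + exp(y*F)). As a hedged, standalone sanity check (not part of the class below, and using +/-1 labels which the sign-based predict assumes), the analytic form can be compared against central finite differences:

import numpy as np

# Standalone check: gradient of the logistic loss L(y, F) = log(1 + exp(-y*F))
# with respect to F is -y / (1 + exp(y*F)); labels here are +/-1.
target = np.array([-1.0, 1.0, -1.0, 1.0])
y_pred = np.array([0.2, 0.8, -0.3, 0.7])

def logistic_loss(y: np.ndarray, f: np.ndarray) -> np.ndarray:
    return np.log(1 + np.exp(-y * f))

analytic = -target / (1 + np.exp(target * y_pred))
eps = 1e-6  # central finite differences
numeric = (logistic_loss(target, y_pred + eps) - logistic_loss(target, y_pred - eps)) / (2 * eps)
print(np.allclose(analytic, numeric))  # True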
import numpy as np
from sklearn.datasets import load_iris
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor


class GradientBoostingClassifier:
    def __init__(self, n_estimators: int = 100, learning_rate: float = 0.1) -> None:
        self.n_estimators = n_estimators
        self.learning_rate = learning_rate
        self.models: list[tuple[DecisionTreeRegressor, float]] = []

    def fit(self, features: np.ndarray, target: np.ndarray) -> None:
        for _ in range(self.n_estimators):
            # Calculate the pseudo-residuals (negative gradient of the loss)
            residuals = -self.gradient(target, self.predict(features))
            # Fit a weak learner (a depth-1 decision tree) to the residuals
            model = DecisionTreeRegressor(max_depth=1)
            model.fit(features, residuals)
            # Update the model by adding the weak learner with a learning rate
            self.models.append((model, self.learning_rate))

    def predict(self, features: np.ndarray) -> np.ndarray:
        # Initialize predictions with zeros
        predictions = np.zeros(features.shape[0])
        for model, learning_rate in self.models:
            predictions += learning_rate * model.predict(features)
        # Convert to binary predictions (-1 or 1)
        return np.sign(predictions)

    def gradient(self, target: np.ndarray, y_pred: np.ndarray) -> np.ndarray:
        # Gradient of the logistic loss for labels encoded as -1/+1
        return -target / (1 + np.exp(target * y_pred))


if __name__ == "__main__":
    iris = load_iris()
    X, y = iris.data, iris.target
    # The classifier expects binary labels in {-1, +1}; binarize the
    # three-class iris targets (setosa vs. the rest) for the demo.
    y = np.where(y == 0, -1, 1)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=42
    )
    clf = GradientBoostingClassifier(n_estimators=100, learning_rate=0.1)
    clf.fit(X_train, y_train)
    y_pred = clf.predict(X_test)
    accuracy = accuracy_score(y_test, y_pred)
    print(f"Accuracy: {accuracy:.2f}")
Implementation of the gradient descent algorithm for minimizing the cost of a linear hypothesis function.

- `_error(example_no, data_set)` — returns the error (hypothesis value minus actual output) for the example pointed to by `example_no`, where `data_set` selects the train data or test data.
- `_hypothesis_value(data_input_tuple)` — calculates the hypothesis function value for the input tuple of a particular example. Note that there is a bias input whose value is fixed at 1; it is not explicitly mentioned in the input data, but ML hypothesis functions use it, so it is handled separately inside this function.
- `output(example_no, data_set)` — fetches the output for the given example from the test data or train data.
- `calculate_hypothesis_value(example_no, data_set)` — calculates the hypothesis value for a given example from the test data or train data.
- `summation_of_cost_derivative(index, end)` — calculates the sum of the cost-function derivative with respect to the parameter at `index`, where `end` is the value at which the summation ends (default is m, the number of examples). Note: an index of -1 means the summation is taken with respect to the bias parameter.
- `get_cost_derivative(index)` — returns the derivative with respect to the parameter at that index of the parameter vector (again, -1 refers to the bias parameter).

The learning rate and the absolute/relative error limits can be tuned to set a tolerance value for the predicted output.
import numpy train_data = ( ((5, 2, 3), 15), ((6, 5, 9), 25), ((11, 12, 13), 41), ((1, 1, 1), 8), ((11, 12, 13), 41), ) test_data = (((515, 22, 13), 555), ((61, 35, 49), 150)) parameter_vector = [2, 4, 1, 5] m = len(train_data) LEARNING_RATE = 0.009 def _error(example_no, data_set="train"): return calculate_hypothesis_value(example_no, data_set) - output( example_no, data_set ) def _hypothesis_value(data_input_tuple): hyp_val = 0 for i in range(len(parameter_vector) - 1): hyp_val += data_input_tuple[i] * parameter_vector[i + 1] hyp_val += parameter_vector[0] return hyp_val def output(example_no, data_set): if data_set == "train": return train_data[example_no][1] elif data_set == "test": return test_data[example_no][1] return None def calculate_hypothesis_value(example_no, data_set): if data_set == "train": return _hypothesis_value(train_data[example_no][0]) elif data_set == "test": return _hypothesis_value(test_data[example_no][0]) return None def summation_of_cost_derivative(index, end=m): summation_value = 0 for i in range(end): if index == -1: summation_value += _error(i) else: summation_value += _error(i) * train_data[i][0][index] return summation_value def get_cost_derivative(index): cost_derivative_value = summation_of_cost_derivative(index, m) / m return cost_derivative_value def run_gradient_descent(): global parameter_vector absolute_error_limit = 0.000002 relative_error_limit = 0 j = 0 while True: j += 1 temp_parameter_vector = [0, 0, 0, 0] for i in range(len(parameter_vector)): cost_derivative = get_cost_derivative(i - 1) temp_parameter_vector[i] = ( parameter_vector[i] - LEARNING_RATE * cost_derivative ) if numpy.allclose( parameter_vector, temp_parameter_vector, atol=absolute_error_limit, rtol=relative_error_limit, ): break parameter_vector = temp_parameter_vector print(("Number of iterations:", j)) def test_gradient_descent(): for i in range(len(test_data)): print(("Actual output value:", output(i, "test"))) print(("Hypothesis output:", calculate_hypothesis_value(i, "test"))) if __name__ == "__main__": run_gradient_descent() print("\nTesting gradient descent for a linear hypothesis function.\n") test_gradient_descent()
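For reference, the per-parameter derivative that `summation_of_cost_derivative` accumulates element by element can also be written in vectorized NumPy form. A minimal sketch under the same data layout (the helper name `vectorized_cost_derivative` is hypothetical, not part of the module above):

import numpy as np


def vectorized_cost_derivative(parameter_vector, train_data):
    # Build the design matrix with a leading bias column of ones
    x = np.array([[1.0, *example[0]] for example in train_data])
    y = np.array([example[1] for example in train_data])
    errors = x @ np.array(parameter_vector) - y
    # One derivative per parameter: (1/m) * X^T (X theta - y)
    return x.T @ errors / len(train_data)


sample_data = (((5, 2, 3), 15), ((6, 5, 9), 25))
print(vectorized_cost_derivative([2, 4, 1, 5], sample_data))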
k-nearest neighbours (kNN) is a simple non-parametric supervised learning algorithm used for classification. Given some labelled training data, a given point is classified using its k nearest neighbours according to some distance metric. The most commonly occurring label among the neighbours becomes the label of the given point. In effect, the label of the given point is decided by a majority vote.

This implementation uses the commonly used Euclidean distance metric, but other distance metrics can also be used.

Reference: https://en.wikipedia.org/wiki/K-nearest_neighbors_algorithm

- `KNN(train_data, train_target, class_labels)` — create a kNN classifier using the given training data and class labels.
- `_euclidean_distance(a, b)` — calculate the Euclidean distance between two points, e.g. the distance between (0, 0) and (3, 4) is 5.0, and between (1, 2, 3) and (1, 8, 11) is 10.0.
- `classify(pred_point, k=5)` — classify a given point using the kNN algorithm: compute the distances of all training points from the point to be classified, choose the k points with the shortest distances, and return the most commonly occurring class among them. For example, with training points [0, 0], [1, 0], [0, 1], [0.5, 0.5] labelled 0 ("A") and [3, 3], [2, 3], [3, 2] labelled 1 ("B"), the point [1.2, 1.2] is classified as "A".
from collections import Counter
from heapq import nsmallest

import numpy as np
from sklearn import datasets
from sklearn.model_selection import train_test_split


class KNN:
    def __init__(
        self,
        train_data: np.ndarray[float],
        train_target: np.ndarray[int],
        class_labels: list[str],
    ) -> None:
        # Materialize the (data point, label) pairs so that classify() can be
        # called more than once (a bare zip iterator would be exhausted after
        # the first call).
        self.data = list(zip(train_data, train_target))
        self.labels = class_labels

    @staticmethod
    def _euclidean_distance(a: np.ndarray[float], b: np.ndarray[float]) -> float:
        return float(np.linalg.norm(a - b))

    def classify(self, pred_point: np.ndarray[float], k: int = 5) -> str:
        # Distances of all points from the point to be classified
        distances = (
            (self._euclidean_distance(data_point[0], pred_point), data_point[1])
            for data_point in self.data
        )

        # Choosing k points with the shortest distances
        votes = (i[1] for i in nsmallest(k, distances))

        # Most commonly occurring class is the one into which the point is classified
        result = Counter(votes).most_common(1)[0][0]
        return self.labels[result]


if __name__ == "__main__":
    import doctest

    doctest.testmod()

    iris = datasets.load_iris()

    X = np.array(iris["data"])
    y = np.array(iris["target"])
    iris_classes = iris["target_names"]

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    iris_point = np.array([4.4, 3.1, 1.3, 1.4])
    classifier = KNN(X_train, y_train, iris_classes)
    print(classifier.classify(iris_point, k=3))
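A small usage sketch, reusing the toy training set from the `classify` docstring above (it assumes the `KNN` class from this module is in scope):

import numpy as np

train_x = np.array([[0, 0], [1, 0], [0, 1], [0.5, 0.5], [3, 3], [2, 3], [3, 2]])
train_y = np.array([0, 0, 0, 0, 1, 1, 1])
classes = ["A", "B"]

knn = KNN(train_x, train_y, classes)
point = np.array([1.2, 1.2])
print(knn.classify(point))  # "A": most of the 5 nearest neighbours carry label 0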
Linear discriminant analysis (LDA). Author: @EverLookNeverSee

Assumptions about data:
    1. The input variables have a Gaussian distribution.
    2. The variance calculated for each input variable by class grouping is the same.
    3. The mix of classes in your training set is representative of the problem.

Learning the model:
The LDA model requires the estimation of statistics from the training data:
    1. Mean of each input value for each class.
    2. Probability of an instance belonging to each class.
    3. Covariance for the input data for each class.

Calculate the class means:
    mean(x) = (1 / n) * sum(x_i)  for i = 1 to n

Calculate the class probabilities:
    P(y = 0) = count(y = 0) / (count(y = 0) + count(y = 1))
    P(y = 1) = count(y = 1) / (count(y = 0) + count(y = 1))

Calculate the variance; we can calculate the variance for the dataset in two steps:
    1. Calculate the squared difference for each input variable from the group mean:
       squared_difference = (x - mean_k) ** 2
    2. Calculate the mean of the squared differences:
       variance = (1 / (count(x) - count(classes))) * sum(squared_difference_i)

Making predictions:
    discriminant(x) = x * (mean / variance) - (mean ** 2 / (2 * variance)) + ln(probability)
After calculating the discriminant value for each class, the class with the largest discriminant value is taken as the prediction.

Functions:
- `gaussian_distribution(mean, std_dev, instance_count)` — makes a training dataset drawn from a Gaussian distribution: generates `instance_count` instances based on the given class mean and the standard deviation entered by the user (or its default value).
- `y_generator(class_count, instance_count)` — makes the corresponding y flags for detecting classes, e.g. `y_generator(2, [5, 10])` yields five 0s followed by ten 1s.
- `calculate_mean(instance_count, items)` — calculates a given class mean: the sum of all items divided by the number of instances.
- `calculate_probabilities(instance_count, total_count)` — calculates the probability that a given instance belongs to a class: the number of instances in that class divided by the number of all instances, e.g. `calculate_probabilities(20, 60)` is 0.3333333333333333.
- `calculate_variance(items, means, total_count)` — calculates the pooled variance: it appends the squared difference of every item from its class mean to a list, then multiplies the sum of all squared differences by one divided by (the number of all instances minus the number of classes).
- `predict_y_values(x_items, means, variance, probabilities)` — predicts new indexes (groups) for the data: for every item it builds a temporary list of discriminant values, one per class, appends it to a results list, and returns the index of the largest discriminant for each item.
- `accuracy(actual_y, predicted_y)` — iterates over one element of each list at a time (zip mode); a prediction is correct if the actual y value equals the predicted y value, and the percentage of accuracy equals the number of correct predictions divided by the number of all data, multiplied by 100.
- `valid_input(input_type, input_msg, err_msg, condition, default)` — asks for a user value and validates that it fulfils a condition: `input_type` is the expected type of the value (usually float or int), `input_msg` is the message shown to the user on the screen, `err_msg` is the message shown in case of error, `condition` is a function that represents the condition that the user input is valid, and `default` is the default value in case the user does not type anything.
- `main()` — starts the execution phase: it asks for the number of classes (data groupings), the standard deviation (default 1.0 for all classes), and each class's instance count and mean; generates the training dataset and the corresponding ys; calculates the actual means, probabilities, and variance; predicts the y values (stored in the `pre_indexes` variable); and prints the accuracy of the model.
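To make the prediction rule concrete, a tiny sketch with hypothetical numbers (two classes with equal priors and a shared variance of 1.0) showing how the largest discriminant picks the class:

from math import log

means = [5.0, 10.0]         # hypothetical class means
variance = 1.0              # shared (pooled) variance
probabilities = [0.5, 0.5]  # equal class priors
x = 6.2                     # point to classify

discriminants = [
    x * (means[k] / variance) - means[k] ** 2 / (2 * variance) + log(probabilities[k])
    for k in range(len(means))
]
# Class 0 wins: 6.2 is closer to mean 5.0 than to mean 10.0
print(discriminants.index(max(discriminants)))  # 0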
from collections.abc import Callable
from math import log
from os import name, system
from random import gauss, seed
from typing import TypeVar


def gaussian_distribution(mean: float, std_dev: float, instance_count: int) -> list:
    seed(1)
    return [gauss(mean, std_dev) for _ in range(instance_count)]


def y_generator(class_count: int, instance_count: list) -> list:
    return [k for k in range(class_count) for _ in range(instance_count[k])]


def calculate_mean(instance_count: int, items: list) -> float:
    # The sum of all items divided by the number of instances
    return sum(items) / instance_count


def calculate_probabilities(instance_count: int, total_count: int) -> float:
    # Number of instances in a specific class divided by the number of all instances
    return instance_count / total_count


def calculate_variance(items: list, means: list, total_count: int) -> float:
    squared_diff = []  # an empty list to store all squared differences
    for i in range(len(items)):
        for j in range(len(items[i])):
            squared_diff.append((items[i][j] - means[i]) ** 2)
    n_classes = len(means)  # number of classes in the dataset
    return 1 / (total_count - n_classes) * sum(squared_diff)


def predict_y_values(
    x_items: list, means: list, variance: float, probabilities: list
) -> list:
    # An empty list to store the discriminant values of all items in the dataset
    results = []
    for i in range(len(x_items)):
        for j in range(len(x_items[i])):
            temp = []  # to store all discriminant values of each item as a list
            for k in range(len(x_items)):
                temp.append(
                    x_items[i][j] * (means[k] / variance)
                    - (means[k] ** 2 / (2 * variance))
                    + log(probabilities[k])
                )
            results.append(temp)
    return [result.index(max(result)) for result in results]


def accuracy(actual_y: list, predicted_y: list) -> float:
    correct = sum(1 for i, j in zip(actual_y, predicted_y) if i == j)
    return (correct / len(actual_y)) * 100


num = TypeVar("num")


def valid_input(
    input_type: Callable[[object], num],
    input_msg: str,
    err_msg: str,
    condition: Callable[[num], bool] = lambda x: True,
    default: str | None = None,
) -> num:
    while True:
        # Read the raw value first so the error message can show it even when
        # the type conversion fails (the original referenced user_input before
        # assignment in that case).
        raw_input_value = input(input_msg).strip() or default
        try:
            user_input = input_type(raw_input_value)
        except (ValueError, TypeError):
            print(
                f"{raw_input_value!r}: Incorrect input type, "
                f"expected {input_type.__name__!r}"
            )
            continue
        if condition(user_input):
            return user_input
        print(f"{user_input}: {err_msg}")


def main():
    while True:
        print(" Linear Discriminant Analysis ".center(50, "*"))
        print("*" * 50, "\n")
        print("First of all we should specify the number of classes that")
        print("we want to generate as training dataset")
        n_classes = valid_input(
            input_type=int,
            condition=lambda x: x > 0,
            input_msg="Enter the number of classes (Data Groupings): ",
            err_msg="Number of classes should be positive!",
        )
        print("-" * 100)

        std_dev = valid_input(
            input_type=float,
            condition=lambda x: x >= 0,
            input_msg=(
                "Enter the value of standard deviation"
                "(Default value is 1.0 for all classes): "
            ),
            err_msg="Standard deviation should not be negative!",
            default="1.0",
        )
        print("-" * 100)

        counts = []  # An empty list to store instance counts of classes in dataset
        for i in range(n_classes):
            user_count = valid_input(
                input_type=int,
                condition=lambda x: x > 0,
                input_msg=(f"Enter the number of instances for class_{i+1}: "),
                err_msg="Number of instances should be positive!",
            )
            counts.append(user_count)
        print("-" * 100)

        # An empty list to store values of user-entered means of classes
        user_means = []
        for a in range(n_classes):
            user_mean = valid_input(
                input_type=float,
                input_msg=(f"Enter the value of mean for class_{a+1}: "),
                err_msg="This is an invalid value.",
            )
            user_means.append(user_mean)
        print("-" * 100)

        print("Standard deviation: ", std_dev)

        # Print out the number of instances in classes in separated lines
        for i, count in enumerate(counts, 1):
            print(f"Number of instances in class_{i} is: {count}")
        print("-" * 100)

        # Print out the mean values of classes in separated lines
        for i, user_mean in enumerate(user_means, 1):
            print(f"Mean of class_{i} is: {user_mean}")
        print("-" * 100)

        # Generating training dataset drawn from gaussian distribution
        x = [
            gaussian_distribution(user_means[j], std_dev, counts[j])
            for j in range(n_classes)
        ]
        print("Generated Normal Distribution: \n", x)
        print("-" * 100)

        # Generating ys to detect the corresponding classes
        y = y_generator(n_classes, counts)
        print("Generated Corresponding Ys: \n", y)
        print("-" * 100)

        # Calculating the value of the actual mean for each class
        actual_means = [calculate_mean(counts[k], x[k]) for k in range(n_classes)]
        for i, actual_mean in enumerate(actual_means, 1):
            print(f"Actual(Real) mean of class_{i} is: {actual_mean}")
        print("-" * 100)

        # Calculating the value of probabilities for each class
        probabilities = [
            calculate_probabilities(counts[i], sum(counts)) for i in range(n_classes)
        ]
        for i, probability in enumerate(probabilities, 1):
            print(f"Probability of class_{i} is: {probability}")
        print("-" * 100)

        # Calculating the value of variance for the dataset
        variance = calculate_variance(x, actual_means, sum(counts))
        print("Variance: ", variance)
        print("-" * 100)

        # Predicting y values; storing predicted y values in pre_indexes
        pre_indexes = predict_y_values(x, actual_means, variance, probabilities)
        print("-" * 100)

        # Calculating accuracy of the model
        print(f"Accuracy: {accuracy(y, pre_indexes)}")
        print("-" * 100)
        print(" DONE ".center(100, "+"))

        if input("Press any key to restart or 'q' for quit: ").strip().lower() == "q":
            print("\n" + "GoodBye!".center(100, "-") + "\n")
            break
        system("cls" if name == "nt" else "clear")  # noqa: S605


if __name__ == "__main__":
    main()
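Because `main()` is interactive, a quick non-interactive sketch of the same pipeline can be handy for testing; it reuses the module's functions with hypothetical parameters:

# Hypothetical setup: two classes, 20 instances each, means 5.0 and 10.0
counts = [20, 20]
class_means = [5.0, 10.0]

x = [gaussian_distribution(class_means[j], 1.0, counts[j]) for j in range(2)]
y = y_generator(2, counts)

actual_means = [calculate_mean(counts[k], x[k]) for k in range(2)]
probabilities = [calculate_probabilities(counts[i], sum(counts)) for i in range(2)]
variance = calculate_variance(x, actual_means, sum(counts))

predictions = predict_y_values(x, actual_means, variance, probabilities)
print(f"Accuracy: {accuracy(y, predictions)}")  # close to 100 for well-separated means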
Linear regression is the most basic type of regression, commonly used for predictive analysis. The idea is pretty simple: we have a dataset, and we have features associated with it. The features should be chosen very cautiously, as they determine how well our model will be able to make future predictions. We try to set the weights of these features over many iterations so that they best fit our dataset. This particular code uses a CS:GO dataset (ADR vs Rating); we try to best fit a line through the dataset and estimate the parameters.

- `collect_dataset()` — collects the CS:GO dataset (ADR vs rating of a player) from the linked CSV and returns it as a matrix, removing the label row from the list.
- `run_steep_gradient_descent(data_x, data_y, len_data, alpha, theta)` — runs one step of steep gradient descent and updates the feature vector accordingly: the updated features are curr_features - alpha * gradient (w.r.t. the features), where `alpha` is the learning rate of the model and `theta` is the feature vector (weights for our model).
- `sum_of_square_error(data_x, data_y, len_data, theta)` — returns the sum of square error computed from the given features, for error calculation.
- `run_linear_regression(data_x, data_y)` — implements linear regression over the dataset and returns the feature vector for the line of best fit.
- `mean_absolute_error(predicted_y, original_y)` — returns the mean absolute error computed from the prediction result vector and the values of the expected outcome.
- `main()` — driver function: collects the dataset, builds the design matrix (with a column of ones), runs linear regression, and prints the resultant feature vector.
import numpy as np import requests def collect_dataset(): response = requests.get( "https://raw.githubusercontent.com/yashLadha/The_Math_of_Intelligence/" "master/Week1/ADRvsRating.csv" ) lines = response.text.splitlines() data = [] for item in lines: item = item.split(",") data.append(item) data.pop(0) dataset = np.matrix(data) return dataset def run_steep_gradient_descent(data_x, data_y, len_data, alpha, theta): n = len_data prod = np.dot(theta, data_x.transpose()) prod -= data_y.transpose() sum_grad = np.dot(prod, data_x) theta = theta - (alpha / n) * sum_grad return theta def sum_of_square_error(data_x, data_y, len_data, theta): prod = np.dot(theta, data_x.transpose()) prod -= data_y.transpose() sum_elem = np.sum(np.square(prod)) error = sum_elem / (2 * len_data) return error def run_linear_regression(data_x, data_y): iterations = 100000 alpha = 0.0001550 no_features = data_x.shape[1] len_data = data_x.shape[0] - 1 theta = np.zeros((1, no_features)) for i in range(iterations): theta = run_steep_gradient_descent(data_x, data_y, len_data, alpha, theta) error = sum_of_square_error(data_x, data_y, len_data, theta) print(f"At Iteration {i + 1} - Error is {error:.5f}") return theta def mean_absolute_error(predicted_y, original_y): total = sum(abs(y - predicted_y[i]) for i, y in enumerate(original_y)) return total / len(original_y) def main(): data = collect_dataset() len_data = data.shape[0] data_x = np.c_[np.ones(len_data), data[:, :-1]].astype(float) data_y = data[:, -1].astype(float) theta = run_linear_regression(data_x, data_y) len_result = theta.shape[1] print("Resultant Feature vector : ") for i in range(len_result): print(f"{theta[0, i]:.5f}") if __name__ == "__main__": main()
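Note that `mean_absolute_error` is defined above but the driver never calls it; a minimal usage sketch on hypothetical predicted/expected vectors (it assumes the function from this module is in scope):

import numpy as np

# Hypothetical expected outcomes and model predictions
original_y = np.array([1.0, 2.0, 3.0])
predicted_y = np.array([0.9, 2.2, 2.8])

# (1/n) * sum(|y_true - y_pred|) = (0.1 + 0.2 + 0.2) / 3
print(mean_absolute_error(predicted_y, original_y))  # ~0.1667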
Locally weighted linear regression, also called local regression, is a type of non-parametric linear regression that prioritizes data closest to a given prediction point. The algorithm estimates the vector of model coefficients using weighted least squares regression:

    beta = (X^T W X)^(-1) (X^T W y)

where X is the design matrix, y is the response vector, and W is the diagonal weight matrix.

This implementation calculates w_i, the weight of the i-th training sample, using the Gaussian weight:

    w_i = exp(-||x_i - x||^2 / (2 * tau^2))

where x_i is the i-th training sample, x is the prediction point, tau is the "bandwidth", and ||x|| is the Euclidean norm (also called the 2-norm or the L^2 norm). The bandwidth tau controls how quickly the weight of a training sample decreases as its distance from the prediction point increases. One can think of the Gaussian weight as a bell curve centered around the prediction point: a training sample is weighted lower if it's farther from the center, and tau controls the spread of the bell curve.

Other types of locally weighted regression, such as locally estimated scatterplot smoothing (LOESS), typically use different weight functions.

References:
- https://en.wikipedia.org/wiki/Local_regression
- https://en.wikipedia.org/wiki/Weighted_least_squares
- https://cs229.stanford.edu/notes2022fall/main_notes.pdf

- `weight_matrix(point, x_train, tau)` — calculates the weight of every point in the training data around a given prediction point, returning an m x m weight matrix (initialized as an identity matrix), where m is the size of the training set.
- `local_weight(point, x_train, y_train, tau)` — calculates the local weights (model coefficients) at a given prediction point using the weight matrix for that point.
- `local_weight_regression(x_train, y_train, tau)` — calculates predictions for each point in the training data; e.g. for the three training points [16.99, 10.34], [21.01, 23.68], [24.59, 25.69] with responses [1.01, 1.66, 3.5] and tau = 0.6, the predictions are approximately [1.07173261, 1.65970737, 3.50160179].
- `load_data(dataset_name, x_name, y_name)` — loads data from seaborn and splits it into x and y points, pairing a column of ones with x_data (no doctests; function is for demo purposes only).
- `plot_preds(...)` — plots predictions and displays the graph (no doctests; function is for demo purposes only).

The demo uses a dataset from the seaborn module.
import matplotlib.pyplot as plt import numpy as np def weight_matrix(point: np.ndarray, x_train: np.ndarray, tau: float) -> np.ndarray: m = len(x_train) weights = np.eye(m) for j in range(m): diff = point - x_train[j] weights[j, j] = np.exp(diff @ diff.T / (-2.0 * tau**2)) return weights def local_weight( point: np.ndarray, x_train: np.ndarray, y_train: np.ndarray, tau: float ) -> np.ndarray: weight_mat = weight_matrix(point, x_train, tau) weight = np.linalg.inv(x_train.T @ weight_mat @ x_train) @ ( x_train.T @ weight_mat @ y_train.T ) return weight def local_weight_regression( x_train: np.ndarray, y_train: np.ndarray, tau: float ) -> np.ndarray: y_pred = np.zeros(len(x_train)) for i, item in enumerate(x_train): y_pred[i] = np.dot(item, local_weight(item, x_train, y_train, tau)).item() return y_pred def load_data( dataset_name: str, x_name: str, y_name: str ) -> tuple[np.ndarray, np.ndarray, np.ndarray]: import seaborn as sns data = sns.load_dataset(dataset_name) x_data = np.array(data[x_name]) y_data = np.array(data[y_name]) one = np.ones(len(y_data)) x_train = np.column_stack((one, x_data)) return x_train, x_data, y_data def plot_preds( x_train: np.ndarray, preds: np.ndarray, x_data: np.ndarray, y_data: np.ndarray, x_name: str, y_name: str, ) -> None: x_train_sorted = np.sort(x_train, axis=0) plt.scatter(x_data, y_data, color="blue") plt.plot( x_train_sorted[:, 1], preds[x_train[:, 1].argsort(0)], color="yellow", linewidth=5, ) plt.title("Local Weighted Regression") plt.xlabel(x_name) plt.ylabel(y_name) plt.show() if __name__ == "__main__": import doctest doctest.testmod() training_data_x, total_bill, tip = load_data("tips", "total_bill", "tip") predictions = local_weight_regression(training_data_x, tip, 5) plot_preds(training_data_x, predictions, total_bill, tip, "total_bill", "tip")
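A short check, assuming the functions above are in scope, that reproduces the small doctest example for `local_weight_regression` (three training points, tau = 0.6):

import numpy as np

x_train = np.array([[16.99, 10.34], [21.01, 23.68], [24.59, 25.69]])
y_train = np.array([[1.01, 1.66, 3.5]])

preds = local_weight_regression(x_train, y_train, 0.6)
print(preds)  # approximately [1.07173261, 1.65970737, 3.50160179]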
Logistic regression from scratch.

Implementing logistic regression for a classification problem. Helpful resources: the Coursera ML course and https://medium.com/@martinpella/logistic-regression-from-scratch-in-python-124c5636b8ac

- `sigmoid_function(z)` — the sigmoid function, or logistic function, is used as a hypothesis function in classification problems:

      f(x) = 1 / (1 + e^(-x))

  The sigmoid function approaches a value of 1 as its input x becomes increasingly positive, and the opposite for negative values; it returns values in the range 0 to 1. Examples: sigmoid_function(4) = 0.9820137900379085; sigmoid_function(np.array([-3, 3])) = array([0.04742587, 0.95257413]). Reference: https://en.wikipedia.org/wiki/Sigmoid_function
- `cost_function(h, y)` — the cost function quantifies the error between predicted and expected values. The cost function used in logistic regression is called log loss, or the cross-entropy function:

      J(theta) = -(1/m) * sum(y * log(h_theta(x)) + (1 - y) * log(1 - h_theta(x)))

  where J(theta) is the cost that we want to minimize during training, m is the number of training examples, the sum runs over all training examples, y is the actual binary label (0 or 1) for a given example, and h_theta(x) is the predicted probability that x belongs to the positive class. Examples: with h = sigmoid_function(np.array([0.3, -4.3, 8.1])) and y = np.array([1, 0, 1]), the cost is 0.18937868932131605; with h = sigmoid_function(0) and y = np.array([1]), it is 0.6931471805599453.
- `logistic_reg(alpha, x, y, max_iterations)` — gradient descent, where alpha is the learning rate, x is the feature matrix, and y is the target matrix; the weights are updated each iteration and the loss is printed after every 100 iterations.
- The demo trains on two iris features, prints theta (i.e., our weights vector), and predicts the value of probability from the logistic regression algorithm to draw the decision boundary.

Reference: https://en.wikipedia.org/wiki/Logistic_regression
import numpy as np from matplotlib import pyplot as plt from sklearn import datasets def sigmoid_function(z: float | np.ndarray) -> float | np.ndarray: return 1 / (1 + np.exp(-z)) def cost_function(h: np.ndarray, y: np.ndarray) -> float: return (-y * np.log(h) - (1 - y) * np.log(1 - h)).mean() def log_likelihood(x, y, weights): scores = np.dot(x, weights) return np.sum(y * scores - np.log(1 + np.exp(scores))) def logistic_reg(alpha, x, y, max_iterations=70000): theta = np.zeros(x.shape[1]) for iterations in range(max_iterations): z = np.dot(x, theta) h = sigmoid_function(z) gradient = np.dot(x.T, h - y) / y.size theta = theta - alpha * gradient z = np.dot(x, theta) h = sigmoid_function(z) j = cost_function(h, y) if iterations % 100 == 0: print(f"loss: {j} \t") return theta if __name__ == "__main__": import doctest doctest.testmod() iris = datasets.load_iris() x = iris.data[:, :2] y = (iris.target != 0) * 1 alpha = 0.1 theta = logistic_reg(alpha, x, y, max_iterations=70000) print("theta: ", theta) def predict_prob(x): return sigmoid_function( np.dot(x, theta) ) plt.figure(figsize=(10, 6)) plt.scatter(x[y == 0][:, 0], x[y == 0][:, 1], color="b", label="0") plt.scatter(x[y == 1][:, 0], x[y == 1][:, 1], color="r", label="1") (x1_min, x1_max) = (x[:, 0].min(), x[:, 0].max()) (x2_min, x2_max) = (x[:, 1].min(), x[:, 1].max()) (xx1, xx2) = np.meshgrid(np.linspace(x1_min, x1_max), np.linspace(x2_min, x2_max)) grid = np.c_[xx1.ravel(), xx2.ravel()] probs = predict_prob(grid).reshape(xx1.shape) plt.contour(xx1, xx2, probs, [0.5], linewidths=1, colors="black") plt.legend() plt.show()
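Once `theta` has been learned by the demo above, the same hypothesis can score a new sample; a minimal sketch with a hypothetical measurement (sepal length and width, matching the two iris features used above):

import numpy as np

# Hypothetical new sample: sepal length 5.0 cm, sepal width 3.4 cm
x_new = np.array([5.0, 3.4])
probability = sigmoid_function(np.dot(x_new, theta))
print(f"P(class 1) = {probability:.3f}")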
Loss functions for machine learning.

- `binary_cross_entropy(y_true, y_pred, epsilon=1e-15)` — calculates the mean binary cross-entropy (BCE) loss between true labels and predicted probabilities. BCE loss quantifies the dissimilarity between true labels (0 or 1) and predicted probabilities; it's widely used in binary classification tasks:

      BCE = -(1/n) * sum(y_true * ln(y_pred) + (1 - y_true) * ln(1 - y_pred))

  `epsilon` is a small constant to avoid numerical instability; predicted probabilities are clipped to avoid log(0). Example: y_true = [0, 1, 1, 0, 1] and y_pred = [0.2, 0.7, 0.9, 0.3, 0.8] give 0.2529995012327421; arrays of different lengths raise "ValueError: Input arrays must have the same length.". Reference: https://en.wikipedia.org/wiki/Cross_entropy

- `binary_focal_cross_entropy(y_true, y_pred, gamma=2.0, alpha=0.25, epsilon=1e-15)` — calculates the mean binary focal cross-entropy (BFCE) loss, a variation of binary cross-entropy that addresses class imbalance by focusing on hard examples:

      BFCE = -(1/n) * sum(alpha * (1 - y_pred)**gamma * y_true * log(y_pred)
                          + (1 - alpha) * y_pred**gamma * (1 - y_true) * log(1 - y_pred))

  `gamma` is the focusing parameter for modulating the loss (default 2.0) and `alpha` is the weighting factor for class 1 (default 0.25). Example: the same arrays as above give 0.008257977659239775. Reference: Lin et al., 2018 (https://arxiv.org/pdf/1708.02002.pdf)

- `categorical_cross_entropy(y_true, y_pred, epsilon=1e-15)` — calculates the categorical cross-entropy (CCE) loss between true class labels (one-hot encoded) and predicted class probabilities:

      CCE = -sum(y_true * ln(y_pred))

  Example: y_true = [[1,0,0],[0,1,0],[0,0,1]] and y_pred = [[0.9,0.1,0.0],[0.2,0.7,0.1],[0.0,0.1,0.9]] give 0.567395975254385. ValueError is raised when the input arrays have different shapes ("Input arrays must have the same shape."), when y_true is not one-hot encoded, or when the predicted probabilities do not sum to approximately 1. Reference: https://en.wikipedia.org/wiki/Cross_entropy

- `hinge_loss(y_true, y_pred)` — calculates the mean hinge loss between true labels and predicted values, used for training support vector machines (SVMs):

      hinge loss = max(0, 1 - true * pred)

  y_true must be encoded as -1 or 1 (ground truth). Example: y_true = [-1, 1, 1, -1, 1] and pred = [-4, -0.3, 0.7, 5, 10] give 1.52; mismatched lengths raise "ValueError: Length of predicted and actual array must be same.", and labels other than -1/1 raise "ValueError: y_true can have values -1 or 1 only.". Reference: https://en.wikipedia.org/wiki/Hinge_loss

- `huber_loss(y_true, y_pred, delta)` — calculates the mean Huber loss between the given ground truth and predicted values. The Huber loss describes the penalty incurred by an estimation procedure and serves as a measure of accuracy for regression models:

      huber loss = 0.5 * (y_true - y_pred)**2                   if |y_true - y_pred| <= delta
                   delta * |y_true - y_pred| - 0.5 * delta**2   otherwise

  Example: y_true = [0.9, 10.0, 2.0, 1.0, 5.2] and y_pred = [0.8, 2.1, 2.9, 4.2, 5.2] with delta = 1.0 give approximately 2.102; mismatched lengths raise "ValueError: Input arrays must have the same length.". Reference: https://en.wikipedia.org/wiki/Huber_loss

- `mean_squared_error(y_true, y_pred)` — calculates the mean squared error (MSE) between ground truth and predicted values; MSE measures the squared difference between true and predicted values and serves as a measure of accuracy for regression models:

      MSE = (1/n) * sum((y_true - y_pred)**2)

  Example: y_true = [1.0, 2.0, 3.0, 4.0, 5.0] and y_pred = [0.8, 2.1, 2.9, 4.2, 5.2] give approximately 0.028. Reference: https://en.wikipedia.org/wiki/Mean_squared_error

- `mean_absolute_error(y_true, y_pred)` — calculates the mean absolute error (MAE) between ground truth (observed) and predicted values; MAE measures the absolute difference between true and predicted values:

      MAE = (1/n) * sum(|y_true - y_pred|)

  Example: the same arrays as for MSE give approximately 0.16. Reference: https://en.wikipedia.org/wiki/Mean_absolute_error

- `mean_squared_logarithmic_error(y_true, y_pred)` — calculates the mean squared logarithmic error (MSLE) between ground truth and predicted values for regression models. It's particularly useful for dealing with skewed or large-value data, and it's often used when the relative differences between predicted and true values are more important than absolute differences:

      MSLE = (1/n) * sum((log(1 + y_true) - log(1 + y_pred))**2)

  Example: the same arrays as for MSE give 0.0030860877925181344. Reference: https://insideaiml.com/blog/mean-squared-logarithmic-error-loss-1035

- `mean_absolute_percentage_error(y_true, y_pred)` — calculates the average of the absolute percentage differences between the predicted and true values:

      MAPE = (1/n) * sum(|(y_true_i - y_pred_i) / y_true_i|)

  Examples: y_true = [10, 20, 30, 40] and y_pred = [12, 18, 33, 45] give 0.13125; y_true = [1, 2, 3, 4] and y_pred = [2, 3, 4, 5] give 0.5208333333333333. Source: https://stephenallwright.com/good-mape-score

- `perplexity_loss(y_true, y_pred, epsilon)` — computes perplexity, which is useful for predicting language model accuracy in natural language processing (NLP); perplexity is a measure of how certain the model is in its predictions:

      perplexity = exp(-(1/N) * sum(ln(p(x))))

  y_true contains actual label-encoded sentences of shape (batch_size, sentence_length) and y_pred contains predicted sentences of shape (batch_size, sentence_length, vocab_size); epsilon is a small floating point number to avoid getting -inf from log(0). Internally it builds a one-hot matrix to select the prediction value only for the true class, then calculates perplexity for each sentence. ValueError is raised if the sentence lengths of y_true and y_pred differ, if a label value is greater than the vocabulary size, or if the batch sizes of y_true and y_pred differ. Reference: https://en.wikipedia.org/wiki/Perplexity
true_values np array 0 9 10 0 2 0 1 0 5 2 predicted_values np array 0 8 2 1 2 9 4 2 5 2 np isclose huber_loss true_values predicted_values 1 0 2 102 true true_labels np array 11 0 21 0 3 32 4 0 5 0 predicted_probs np array 8 3 20 8 2 9 11 2 5 0 np isclose huber_loss true_labels predicted_probs 1 0 1 80164 true true_labels np array 11 0 21 0 3 32 4 0 predicted_probs np array 8 3 20 8 2 9 11 2 5 0 huber_loss true_labels predicted_probs 1 0 traceback most recent call last valueerror input arrays must have the same length calculate the mean squared error mse between ground truth and predicted values mse measures the squared difference between true values and predicted values and it serves as a measure of accuracy for regression models mse 1 n σ y_true y_pred 2 reference https en wikipedia org wiki mean_squared_error parameters y_true the true values ground truth y_pred the predicted values true_values np array 1 0 2 0 3 0 4 0 5 0 predicted_values np array 0 8 2 1 2 9 4 2 5 2 np isclose mean_squared_error true_values predicted_values 0 028 true true_labels np array 1 0 2 0 3 0 4 0 5 0 predicted_probs np array 0 3 0 8 0 9 0 2 mean_squared_error true_labels predicted_probs traceback most recent call last valueerror input arrays must have the same length calculates the mean absolute error mae between ground truth observed and predicted values mae measures the absolute difference between true values and predicted values equation mae 1 n σ abs y_true y_pred reference https en wikipedia org wiki mean_absolute_error parameters y_true the true values ground truth y_pred the predicted values true_values np array 1 0 2 0 3 0 4 0 5 0 predicted_values np array 0 8 2 1 2 9 4 2 5 2 np isclose mean_absolute_error true_values predicted_values 0 16 true true_values np array 1 0 2 0 3 0 4 0 5 0 predicted_values np array 0 8 2 1 2 9 4 2 5 2 np isclose mean_absolute_error true_values predicted_values 2 16 false true_labels np array 1 0 2 0 3 0 4 0 5 0 predicted_probs np array 0 3 0 8 0 9 5 2 mean_absolute_error true_labels predicted_probs traceback most recent call last valueerror input arrays must have the same length calculate the mean squared logarithmic error msle between ground truth and predicted values msle measures the squared logarithmic difference between true values and predicted values for regression models it s particularly useful for dealing with skewed or large value data and it s often used when the relative differences between predicted and true values are more important than absolute differences msle 1 n σ log 1 y_true log 1 y_pred 2 reference https insideaiml com blog meansquared logarithmic error loss 1035 parameters y_true the true values ground truth y_pred the predicted values true_values np array 1 0 2 0 3 0 4 0 5 0 predicted_values np array 0 8 2 1 2 9 4 2 5 2 mean_squared_logarithmic_error true_values predicted_values 0 0030860877925181344 true_labels np array 1 0 2 0 3 0 4 0 5 0 predicted_probs np array 0 3 0 8 0 9 0 2 mean_squared_logarithmic_error true_labels predicted_probs traceback most recent call last valueerror input arrays must have the same length calculate the mean absolute percentage error between y_true and y_pred mean absolute percentage error calculates the average of the absolute percentage differences between the predicted and true values formula σ y_true i y_pred i y_true i n source https stephenallwright com good mape score parameters y_true np ndarray numpy array containing true target values y_pred np ndarray numpy array containing predicted values returns float the 
mean absolute percentage error between y_true and y_pred examples y_true np array 10 20 30 40 y_pred np array 12 18 33 45 mean_absolute_percentage_error y_true y_pred 0 13125 y_true np array 1 2 3 4 y_pred np array 2 3 4 5 mean_absolute_percentage_error y_true y_pred 0 5208333333333333 y_true np array 34 37 44 47 48 48 46 43 32 27 26 24 y_pred np array 37 40 46 44 46 50 45 44 34 30 22 23 mean_absolute_percentage_error y_true y_pred 0 064671076436071 calculate the perplexity for the y_true and y_pred compute the perplexity which useful in predicting language model accuracy in natural language processing nlp perplexity is measure of how certain the model in its predictions perplexity loss exp 1 n σ ln p x reference https en wikipedia org wiki perplexity args y_true actual label encoded sentences of shape batch_size sentence_length y_pred predicted sentences of shape batch_size sentence_length vocab_size epsilon small floating point number to avoid getting inf for log 0 returns perplexity loss between y_true and y_pred y_true np array 1 4 2 3 y_pred np array 0 28 0 19 0 21 0 15 0 15 0 24 0 19 0 09 0 18 0 27 0 03 0 26 0 21 0 18 0 30 0 28 0 10 0 33 0 15 0 12 perplexity_loss y_true y_pred 5 0247347775367945 y_true np array 1 4 2 3 y_pred np array 0 28 0 19 0 21 0 15 0 15 0 24 0 19 0 09 0 18 0 27 0 30 0 10 0 20 0 15 0 25 0 03 0 26 0 21 0 18 0 30 0 28 0 10 0 33 0 15 0 12 0 30 0 10 0 20 0 15 0 25 perplexity_loss y_true y_pred traceback most recent call last valueerror sentence length of y_true and y_pred must be equal y_true np array 1 4 2 11 y_pred np array 0 28 0 19 0 21 0 15 0 15 0 24 0 19 0 09 0 18 0 27 0 03 0 26 0 21 0 18 0 30 0 28 0 10 0 33 0 15 0 12 perplexity_loss y_true y_pred traceback most recent call last valueerror label value must not be greater than vocabulary size y_true np array 1 4 y_pred np array 0 28 0 19 0 21 0 15 0 15 0 24 0 19 0 09 0 18 0 27 0 03 0 26 0 21 0 18 0 30 0 28 0 10 0 33 0 15 0 12 perplexity_loss y_true y_pred traceback most recent call last valueerror batch size of y_true and y_pred must be equal matrix to select prediction value only for true class getting the matrix containing prediction for only true class calculating perplexity for each sentence
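The BCE formula above reduces to averaging -ln of the probability the model assigns to the correct class. A minimal hand check of the example value (a sketch, not part of the implementation below):

import numpy as np

# Probability assigned to the correct class for each of the five samples in
# the BCE example above: 1 - y_pred where y_true is 0, y_pred where it is 1.
p_correct = np.array([0.8, 0.7, 0.9, 0.7, 0.8])
print(-np.log(p_correct).mean())  # 0.2529995012327421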
import numpy as np def binary_cross_entropy( y_true: np.ndarray, y_pred: np.ndarray, epsilon: float = 1e-15 ) -> float: if len(y_true) != len(y_pred): raise ValueError("Input arrays must have the same length.") y_pred = np.clip(y_pred, epsilon, 1 - epsilon) bce_loss = -(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred)) return np.mean(bce_loss) def binary_focal_cross_entropy( y_true: np.ndarray, y_pred: np.ndarray, gamma: float = 2.0, alpha: float = 0.25, epsilon: float = 1e-15, ) -> float: if len(y_true) != len(y_pred): raise ValueError("Input arrays must have the same length.") y_pred = np.clip(y_pred, epsilon, 1 - epsilon) bcfe_loss = -( alpha * (1 - y_pred) ** gamma * y_true * np.log(y_pred) + (1 - alpha) * y_pred**gamma * (1 - y_true) * np.log(1 - y_pred) ) return np.mean(bcfe_loss) def categorical_cross_entropy( y_true: np.ndarray, y_pred: np.ndarray, epsilon: float = 1e-15 ) -> float: if y_true.shape != y_pred.shape: raise ValueError("Input arrays must have the same shape.") if np.any((y_true != 0) & (y_true != 1)) or np.any(y_true.sum(axis=1) != 1): raise ValueError("y_true must be one-hot encoded.") if not np.all(np.isclose(np.sum(y_pred, axis=1), 1, rtol=epsilon, atol=epsilon)): raise ValueError("Predicted probabilities must sum to approximately 1.") y_pred = np.clip(y_pred, epsilon, 1) return -np.sum(y_true * np.log(y_pred)) def hinge_loss(y_true: np.ndarray, y_pred: np.ndarray) -> float: if len(y_true) != len(y_pred): raise ValueError("Length of predicted and actual array must be same.") if np.any((y_true != -1) & (y_true != 1)): raise ValueError("y_true can have values -1 or 1 only.") hinge_losses = np.maximum(0, 1.0 - (y_true * y_pred)) return np.mean(hinge_losses) def huber_loss(y_true: np.ndarray, y_pred: np.ndarray, delta: float) -> float: if len(y_true) != len(y_pred): raise ValueError("Input arrays must have the same length.") huber_mse = 0.5 * (y_true - y_pred) ** 2 huber_mae = delta * (np.abs(y_true - y_pred) - 0.5 * delta) return np.where(np.abs(y_true - y_pred) <= delta, huber_mse, huber_mae).mean() def mean_squared_error(y_true: np.ndarray, y_pred: np.ndarray) -> float: if len(y_true) != len(y_pred): raise ValueError("Input arrays must have the same length.") squared_errors = (y_true - y_pred) ** 2 return np.mean(squared_errors) def mean_absolute_error(y_true: np.ndarray, y_pred: np.ndarray) -> float: if len(y_true) != len(y_pred): raise ValueError("Input arrays must have the same length.") return np.mean(abs(y_true - y_pred)) def mean_squared_logarithmic_error(y_true: np.ndarray, y_pred: np.ndarray) -> float: if len(y_true) != len(y_pred): raise ValueError("Input arrays must have the same length.") squared_logarithmic_errors = (np.log1p(y_true) - np.log1p(y_pred)) ** 2 return np.mean(squared_logarithmic_errors) def mean_absolute_percentage_error( y_true: np.ndarray, y_pred: np.ndarray, epsilon: float = 1e-15 ) -> float: if len(y_true) != len(y_pred): raise ValueError("The length of the two arrays should be the same.") y_true = np.where(y_true == 0, epsilon, y_true) absolute_percentage_diff = np.abs((y_true - y_pred) / y_true) return np.mean(absolute_percentage_diff) def perplexity_loss( y_true: np.ndarray, y_pred: np.ndarray, epsilon: float = 1e-7 ) -> float: vocab_size = y_pred.shape[2] if y_true.shape[0] != y_pred.shape[0]: raise ValueError("Batch size of y_true and y_pred must be equal.") if y_true.shape[1] != y_pred.shape[1]: raise ValueError("Sentence length of y_true and y_pred must be equal.") if np.max(y_true) > vocab_size: raise ValueError("Label 
value must not be greater than vocabulary size.") filter_matrix = np.array( [[list(np.eye(vocab_size)[word]) for word in sentence] for sentence in y_true] ) true_class_pred = np.sum(y_pred * filter_matrix, axis=2).clip(epsilon, 1) perp_losses = np.exp(np.negative(np.mean(np.log(true_class_pred), axis=1))) return np.mean(perp_losses) if __name__ == "__main__": import doctest doctest.testmod()
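Assuming the functions defined above are in scope, a short usage sketch that reproduces a few of the documented example values:

import numpy as np

# Binary classification losses on the example labels and probabilities.
y_true = np.array([0, 1, 1, 0, 1])
y_pred = np.array([0.2, 0.7, 0.9, 0.3, 0.8])
print(binary_cross_entropy(y_true, y_pred))        # ~0.253
print(binary_focal_cross_entropy(y_true, y_pred))  # ~0.00826

# Regression losses on the example ground truth and predictions.
true_values = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
predicted_values = np.array([0.8, 2.1, 2.9, 4.2, 5.2])
print(mean_squared_error(true_values, predicted_values))   # ~0.028
print(mean_absolute_error(true_values, predicted_values))  # ~0.16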
Mel-frequency cepstral coefficients (MFCC) calculation.

MFCC is an algorithm widely used in audio and speech processing to represent the short-term power spectrum of a sound signal in a more compact and discriminative way. It is particularly popular in speech and audio processing tasks such as speech recognition and speaker identification.

How mel-frequency cepstral coefficients are calculated:

1. Preprocessing: Load an audio signal and normalize it to ensure that the values fall within a specific range (e.g., between -1 and 1). Frame the audio signal into overlapping, fixed-length segments, typically using a technique like windowing to reduce spectral leakage.

2. Fourier transform: Apply a fast Fourier transform (FFT) to each audio frame to convert it from the time domain to the frequency domain. This results in a representation of the audio frame as a sequence of frequency components.

3. Power spectrum: Calculate the power spectrum by taking the squared magnitude of each frequency component obtained from the FFT. This step measures the energy distribution across different frequency bands.

4. Mel filterbank: Apply a set of triangular filterbanks spaced on the mel-frequency scale to the power spectrum. These filters mimic the human auditory system's frequency response. Each filterbank sums the power spectrum values within its band.

5. Logarithmic compression: Take the logarithm (typically base 10) of the filterbank values to compress the dynamic range. This step mimics the logarithmic response of the human ear to sound intensity.

6. Discrete cosine transform (DCT): Apply the DCT to the log filterbank energies to obtain the MFCC coefficients. This transformation helps decorrelate the filterbank energies and captures the most important features of the audio signal.

7. Feature extraction: Select a subset of the DCT coefficients to form the feature vector. Often the first few coefficients (e.g., 12-13) are used for most applications.

References:
- Mel-frequency cepstral coefficients (MFCCs): https://en.wikipedia.org/wiki/Mel-frequency_cepstrum
- Speech and Language Processing by Daniel Jurafsky and James H. Martin: https://web.stanford.edu/~jurafsky/slp3/
- Mel Frequency Cepstral Coefficient (MFCC) tutorial: http://practicalcryptography.com/miscellaneous/machine-learning/guide-mel-frequency-cepstral-coefficients-mfccs/

Author: Amir Lavasani

The implementation below provides:
- mfcc(audio, sample_rate, ftt_size=1024, hop_length=20, mel_filter_num=10, dct_filter_num=40): calculates the MFCC matrix for an input signal (hop_length in milliseconds) and raises a ValueError if the input audio is empty. For example, a 2-second 440 Hz sine wave sampled at 44.1 kHz yields an MFCC matrix of shape (40, 101). For simplicity, the Hann window is used for windowing.
- normalize(audio): scales the signal to values between -1 and 1 by dividing the entire signal by its maximum absolute value.
- audio_frames(audio, sample_rate, hop_length=20, ftt_size=1024): pads the signal to handle edge cases, calculates the number of frames, and splits the signal into overlapping frames; e.g., 1000 repetitions of [1..10] at 8 kHz with hop_length=10 and ftt_size=512 give frames of shape (126, 512).
- calculate_fft(audio_windowed, ftt_size=1024): computes the FFT of the windowed audio data, transposing the data so that time is in rows and channels in columns, computing the FFT channel by channel, and transposing the result back.
- calculate_signal_power(audio_fft): calculates the power by squaring the absolute values of the FFT coefficients.
- freq_to_mel(freq) and mel_to_freq(mels): convert between hertz and the mel scale; round(freq_to_mel(1000), 2) == 999.99 and round(mel_to_freq(999.99), 2) == 1000.01.
- mel_spaced_filterbank(sample_rate, mel_filter_num=10, ftt_size=1024): calculates the filter points and mel frequencies and normalizes the filters (normalization taken from the librosa library); e.g., round(mel_spaced_filterbank(8000, 10, 1024)[0][1], 10) == 0.0004603981.
- get_filters(filter_points, ftt_size): builds the triangular filters, which linearly increase from 0 to 1 and then linearly decrease from 1 to 0; get_filters(np.array([0, 20, 51, 95, 161, 256], dtype=int), 512).shape == (4, 257).
- get_filter_points(sample_rate, freq_min, freq_high, mel_filter_num=10, ftt_size=1024): converts the minimum and maximum frequencies to the mel scale, generates equally spaced mel frequencies, converts them back to hertz, and calculates the filter points as integer bin indices; e.g., get_filter_points(8000, 0, 4000, mel_filter_num=4, ftt_size=512) returns points [0, 20, 51, 95, 161, 256] with frequencies [0., 324.46707094, 799.33254207, 1494.30973963, 2511.42581671, 4000.].
- discrete_cosine_transform(dct_filter_num, filter_num): computes the DCT basis matrix over the fbank filters; round(discrete_cosine_transform(3, 5)[0][0], 5) == 0.44721.
- example(wav_file_path): loads audio from a WAV file and calculates its MFCCs.
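As a quick sanity check on the hertz-to-mel conversion used below, a minimal sketch of the formula m = 2595 * log10(1 + f / 700), which is exactly what freq_to_mel implements:

import math

def hz_to_mel(freq: float) -> float:
    # m = 2595 * log10(1 + f / 700)
    return 2595.0 * math.log10(1.0 + freq / 700.0)

print(round(hz_to_mel(1000), 2))  # 999.99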
import logging import numpy as np import scipy.fftpack as fft from scipy.signal import get_window logging.basicConfig(filename=f"{__file__}.log", level=logging.INFO) def mfcc( audio: np.ndarray, sample_rate: int, ftt_size: int = 1024, hop_length: int = 20, mel_filter_num: int = 10, dct_filter_num: int = 40, ) -> np.ndarray: logging.info(f"Sample rate: {sample_rate}Hz") logging.info(f"Audio duration: {len(audio) / sample_rate}s") logging.info(f"Audio min: {np.min(audio)}") logging.info(f"Audio max: {np.max(audio)}") audio_normalized = normalize(audio) logging.info(f"Normalized audio min: {np.min(audio_normalized)}") logging.info(f"Normalized audio max: {np.max(audio_normalized)}") audio_framed = audio_frames( audio_normalized, sample_rate, ftt_size=ftt_size, hop_length=hop_length ) logging.info(f"Framed audio shape: {audio_framed.shape}") logging.info(f"First frame: {audio_framed[0]}") window = get_window("hann", ftt_size, fftbins=True) audio_windowed = audio_framed * window logging.info(f"Windowed audio shape: {audio_windowed.shape}") logging.info(f"First frame: {audio_windowed[0]}") audio_fft = calculate_fft(audio_windowed, ftt_size) logging.info(f"fft audio shape: {audio_fft.shape}") logging.info(f"First frame: {audio_fft[0]}") audio_power = calculate_signal_power(audio_fft) logging.info(f"power audio shape: {audio_power.shape}") logging.info(f"First frame: {audio_power[0]}") filters = mel_spaced_filterbank(sample_rate, mel_filter_num, ftt_size) logging.info(f"filters shape: {filters.shape}") audio_filtered = np.dot(filters, np.transpose(audio_power)) audio_log = 10.0 * np.log10(audio_filtered) logging.info(f"audio_log shape: {audio_log.shape}") dct_filters = discrete_cosine_transform(dct_filter_num, mel_filter_num) cepstral_coefficents = np.dot(dct_filters, audio_log) logging.info(f"cepstral_coefficents shape: {cepstral_coefficents.shape}") return cepstral_coefficents def normalize(audio: np.ndarray) -> np.ndarray: return audio / np.max(np.abs(audio)) def audio_frames( audio: np.ndarray, sample_rate: int, hop_length: int = 20, ftt_size: int = 1024, ) -> np.ndarray: hop_size = np.round(sample_rate * hop_length / 1000).astype(int) audio = np.pad(audio, int(ftt_size / 2), mode="reflect") frame_count = int((len(audio) - ftt_size) / hop_size) + 1 frames = np.zeros((frame_count, ftt_size)) for n in range(frame_count): frames[n] = audio[n * hop_size : n * hop_size + ftt_size] return frames def calculate_fft(audio_windowed: np.ndarray, ftt_size: int = 1024) -> np.ndarray: audio_transposed = np.transpose(audio_windowed) audio_fft = np.empty( (int(1 + ftt_size // 2), audio_transposed.shape[1]), dtype=np.complex64, order="F", ) for n in range(audio_fft.shape[1]): audio_fft[:, n] = fft.fft(audio_transposed[:, n], axis=0)[: audio_fft.shape[0]] return np.transpose(audio_fft) def calculate_signal_power(audio_fft: np.ndarray) -> np.ndarray: return np.square(np.abs(audio_fft)) def freq_to_mel(freq: float) -> float: return 2595.0 * np.log10(1.0 + freq / 700.0) def mel_to_freq(mels: float) -> float: return 700.0 * (10.0 ** (mels / 2595.0) - 1.0) def mel_spaced_filterbank( sample_rate: int, mel_filter_num: int = 10, ftt_size: int = 1024 ) -> np.ndarray: freq_min = 0 freq_high = sample_rate // 2 logging.info(f"Minimum frequency: {freq_min}") logging.info(f"Maximum frequency: {freq_high}") filter_points, mel_freqs = get_filter_points( sample_rate, freq_min, freq_high, mel_filter_num, ftt_size, ) filters = get_filters(filter_points, ftt_size) enorm = 2.0 / (mel_freqs[2 : mel_filter_num + 2] - 
mel_freqs[:mel_filter_num]) return filters * enorm[:, np.newaxis] def get_filters(filter_points: np.ndarray, ftt_size: int) -> np.ndarray: num_filters = len(filter_points) - 2 filters = np.zeros((num_filters, int(ftt_size / 2) + 1)) for n in range(num_filters): start = filter_points[n] mid = filter_points[n + 1] end = filter_points[n + 2] filters[n, start:mid] = np.linspace(0, 1, mid - start) filters[n, mid:end] = np.linspace(1, 0, end - mid) return filters def get_filter_points( sample_rate: int, freq_min: int, freq_high: int, mel_filter_num: int = 10, ftt_size: int = 1024, ) -> tuple[np.ndarray, np.ndarray]: fmin_mel = freq_to_mel(freq_min) fmax_mel = freq_to_mel(freq_high) logging.info(f"MEL min: {fmin_mel}") logging.info(f"MEL max: {fmax_mel}") mels = np.linspace(fmin_mel, fmax_mel, num=mel_filter_num + 2) freqs = mel_to_freq(mels) filter_points = np.floor((ftt_size + 1) / sample_rate * freqs).astype(int) return filter_points, freqs def discrete_cosine_transform(dct_filter_num: int, filter_num: int) -> np.ndarray: basis = np.empty((dct_filter_num, filter_num)) basis[0, :] = 1.0 / np.sqrt(filter_num) samples = np.arange(1, 2 * filter_num, 2) * np.pi / (2.0 * filter_num) for i in range(1, dct_filter_num): basis[i, :] = np.cos(i * samples) * np.sqrt(2.0 / filter_num) return basis def example(wav_file_path: str = "./path-to-file/sample.wav") -> np.ndarray: from scipy.io import wavfile sample_rate, audio = wavfile.read(wav_file_path) return mfcc(audio, sample_rate) if __name__ == "__main__": import doctest doctest.testmod()
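Assuming the functions defined above are in scope, the documented example can be run end-to-end:

import numpy as np

# Generate a 2-second, 440 Hz sine wave at 44.1 kHz and compute its MFCCs,
# mirroring the example in the documentation above.
sample_rate = 44100
duration = 2.0
t = np.linspace(0, duration, int(sample_rate * duration), endpoint=False)
audio = 0.5 * np.sin(2 * np.pi * 440.0 * t)

mfccs = mfcc(audio, sample_rate)
print(mfccs.shape)  # (40, 101): 40 DCT filters x 101 frames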
A minimal scikit-learn demo: train an MLPClassifier (multilayer perceptron) on four 2-D points and predict three test points. The wrapper helper converts the NumPy array returned by predict into a plain Python list, so the expected result reads simply as wrapper(Y) == [0, 0, 1].
from sklearn.neural_network import MLPClassifier X = [[0.0, 0.0], [1.0, 1.0], [1.0, 0.0], [0.0, 1.0]] y = [0, 1, 0, 0] clf = MLPClassifier( solver="lbfgs", alpha=1e-5, hidden_layer_sizes=(5, 2), random_state=1 ) clf.fit(X, y) test = [[0.0, 0.0], [0.0, 1.0], [1.0, 1.0]] Y = clf.predict(test) def wrapper(y): return list(y) if __name__ == "__main__": import doctest doctest.testmod()
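For an explicit check of the demo above (the output follows from the documented example):

print(wrapper(Y))  # [0, 0, 1]: only the test point (1., 1.) is assigned class 1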
Polynomial regression is a type of regression analysis that models the relationship between a predictor x and the response y as an mth-degree polynomial:

y = β₀ + β₁x + β₂x² + ... + βₘxᵐ + ε

By treating x, x², ..., xᵐ as distinct variables, we see that polynomial regression is a special case of multiple linear regression. Therefore, we can use ordinary least squares (OLS) estimation to estimate the vector of model parameters β = (β₀, β₁, ..., βₘ) for polynomial regression:

β = (XᵀX)⁻¹Xᵀy = X⁺y

where X is the design matrix, y is the response vector, and X⁺ denotes the Moore-Penrose pseudoinverse of X. In the case of polynomial regression, the design matrix is the Vandermonde matrix (https://en.wikipedia.org/wiki/Vandermonde_matrix):

X = [[1, x₁, x₁², ..., x₁ᵐ],
     [1, x₂, x₂², ..., x₂ᵐ],
     ...,
     [1, xₙ, xₙ², ..., xₙᵐ]]

In OLS estimation, inverting XᵀX to compute X⁺ can be very numerically unstable. This implementation sidesteps the need to invert XᵀX by computing X⁺ using singular value decomposition (SVD):

β = VΣ⁺Uᵀy

where UΣVᵀ is an SVD of X.

References:
- https://en.wikipedia.org/wiki/Polynomial_regression
- https://en.wikipedia.org/wiki/Moore%E2%80%93Penrose_inverse
- https://en.wikipedia.org/wiki/Numerical_methods_for_linear_least_squares
- https://en.wikipedia.org/wiki/Singular_value_decomposition

Notes on the implementation below:
- The constructor raises a ValueError if the polynomial degree is negative.
- _design_matrix(data, degree) constructs the Vandermonde matrix for the given input data (either for model fitting or for prediction) and raises a ValueError if the data is not N x 1. For x = [0, 1, 2] and degree 2 it returns [[1, 0, 0], [1, 1, 1], [1, 2, 4]].
- fit(x_train, y_train) computes the model parameters with np.linalg.pinv, which computes the Moore-Penrose pseudoinverse using SVD. It raises an ArithmeticError if X isn't full rank, since then XᵀX is singular and β doesn't exist: fitting y = x³ - 2x² + 3x - 5 on x = 0, 1, ..., 10 with degree 3 recovers params [-5, 3, -2, 1], whereas degree 20 on those same 11 points fails. Even for a degree-10 polynomial with large coefficients, where multicollinearity (https://en.wikipedia.org/wiki/Multicollinearity) makes errors grow, the recovered coefficients agree with the true ones to within an absolute tolerance of 10e-3.
- predict(data) constructs the design matrix and evaluates ŷ = Xβ, raising an ArithmeticError if it is called before the model parameters are fit. For the cubic above, predict([-1]) == [-11], predict([-2]) == [-27], and predict([6]) == [157].
- main() fits a polynomial regression model to predict fuel efficiency using seaborn's mpg dataset; it is only for demo purposes.
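The design matrix above is exactly NumPy's increasing Vandermonde matrix, which is what the implementation below builds; a quick illustration:

import numpy as np

# Degree-2 design matrix for x = [0, 1, 2]: columns are x**0, x**1, x**2.
print(np.vander(np.array([0, 1, 2]), N=3, increasing=True))
# [[1 0 0]
#  [1 1 1]
#  [1 2 4]]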
import matplotlib.pyplot as plt import numpy as np class PolynomialRegression: __slots__ = "degree", "params" def __init__(self, degree: int) -> None: if degree < 0: raise ValueError("Polynomial degree must be non-negative") self.degree = degree self.params = None @staticmethod def _design_matrix(data: np.ndarray, degree: int) -> np.ndarray: rows, *remaining = data.shape if remaining: raise ValueError("Data must have dimensions N x 1") return np.vander(data, N=degree + 1, increasing=True) def fit(self, x_train: np.ndarray, y_train: np.ndarray) -> None: X = PolynomialRegression._design_matrix(x_train, self.degree) _, cols = X.shape if np.linalg.matrix_rank(X) < cols: raise ArithmeticError( "Design matrix is not full rank, can't compute coefficients" ) self.params = np.linalg.pinv(X) @ y_train def predict(self, data: np.ndarray) -> np.ndarray: if self.params is None: raise ArithmeticError("Predictor hasn't been fit yet") return PolynomialRegression._design_matrix(data, self.degree) @ self.params def main() -> None: import seaborn as sns mpg_data = sns.load_dataset("mpg") poly_reg = PolynomialRegression(degree=2) poly_reg.fit(mpg_data.weight, mpg_data.mpg) weight_sorted = np.sort(mpg_data.weight) predictions = poly_reg.predict(weight_sorted) plt.scatter(mpg_data.weight, mpg_data.mpg, color="gray", alpha=0.5) plt.plot(weight_sorted, predictions, color="red", linewidth=3) plt.title("Predicting Fuel Efficiency Using Polynomial Regression") plt.xlabel("Weight (lbs)") plt.ylabel("Fuel Efficiency (mpg)") plt.show() if __name__ == "__main__": import doctest doctest.testmod() main()
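Assuming the class above is in scope, a usage sketch that reproduces the documented cubic example:

import numpy as np

# Fit the cubic y = x**3 - 2*x**2 + 3*x - 5 and recover its coefficients
# (constant term first), per the examples in the documentation above.
x = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
y = x**3 - 2 * x**2 + 3 * x - 5

poly_reg = PolynomialRegression(degree=3)
poly_reg.fit(x, y)
print(poly_reg.params)                    # [-5.  3. -2.  1.]
print(poly_reg.predict(np.array([-1])))   # [-11.]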
Here I implemented the scoring functions. MAE, MSE, RMSE, and RMSLE are included; those are used for calculating differences between predicted values and actual values. The metrics differ only slightly: sometimes the difference is squared, sometimes rooted, and sometimes even the log is used. Using logs and roots can be seen as tools for penalizing big errors. Which metric is appropriate, however, depends on the situation and the type of data.

Mean absolute error (MAE), examples rounded for precision: for actual = [1, 2, 3] and predict = [1, 4, 3], np.around(mae(predict, actual), decimals=2) == 0.67; identical arrays give 0.0.

Mean squared error (MSE): for the same inputs, np.around(mse(predict, actual), decimals=2) == 1.33; identical arrays give 0.0.

Root mean squared error (RMSE): for the same inputs, np.around(rmse(predict, actual), decimals=2) == 1.15; identical arrays give 0.0.

Root mean square logarithmic error (RMSLE): for actual = [10, 10, 30] and predict = [10, 2, 30], np.around(rmsle(predict, actual), decimals=2) == 0.75; identical arrays give 0.0.

Mean bias deviation (MBD): this value is negative if the model underpredicts and positive if it overpredicts. For example (rounded for precision), where the model overpredicts, actual = [1, 2, 3] and predict = [2, 3, 4] give np.around(mbd(predict, actual), decimals=2) == 50.0; where it underpredicts, predict = [0, 1, 1] gives -66.67.
import numpy as np def mae(predict, actual): predict = np.array(predict) actual = np.array(actual) difference = abs(predict - actual) score = difference.mean() return score def mse(predict, actual): predict = np.array(predict) actual = np.array(actual) difference = predict - actual square_diff = np.square(difference) score = square_diff.mean() return score def rmse(predict, actual): predict = np.array(predict) actual = np.array(actual) difference = predict - actual square_diff = np.square(difference) mean_square_diff = square_diff.mean() score = np.sqrt(mean_square_diff) return score def rmsle(predict, actual): predict = np.array(predict) actual = np.array(actual) log_predict = np.log(predict + 1) log_actual = np.log(actual + 1) difference = log_predict - log_actual square_diff = np.square(difference) mean_square_diff = square_diff.mean() score = np.sqrt(mean_square_diff) return score def mbd(predict, actual): predict = np.array(predict) actual = np.array(actual) difference = predict - actual numerator = np.sum(difference) / len(predict) denumerator = np.sum(actual) / len(predict) score = float(numerator) / denumerator * 100 return score def manual_accuracy(predict, actual): return np.mean(np.array(actual) == np.array(predict))
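Assuming the functions above are in scope, a short usage sketch matching the documented values:

import numpy as np

actual = [1, 2, 3]
predict = [1, 4, 3]
print(np.around(mae(predict, actual), decimals=2))   # 0.67
print(np.around(mse(predict, actual), decimals=2))   # 1.33
print(np.around(rmse(predict, actual), decimals=2))  # 1.15

# MBD is positive when the model over-predicts, negative when it under-predicts.
print(np.around(mbd([2, 3, 4], actual), decimals=2))  # 50.0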
https://en.wikipedia.org/wiki/Self-organizing_map

A self-organizing map clusters samples by repeatedly (1) finding the winning weight vector, the one closest to the sample by Euclidean distance, and (2) pulling that winner a fraction alpha toward the sample.

get_winner(weights, sample) computes the winning vector by Euclidean distance; e.g., SelfOrganizingMap().get_winner([[1, 2, 3], [4, 5, 6]], [1, 2, 3]) == 0, since the sample coincides with the first weight vector.

update(weights, sample, j, alpha) updates the winning vector: SelfOrganizingMap().update([[1, 2, 3], [4, 5, 6]], [1, 2, 3], 1, 0.1) == [[1, 2, 3], [3.7, 4.7, 5.7]].

The driver code sets up the training examples (m, n) and the weight initialization (n, c), then trains: for each training sample, compute the winning vector and update it. Finally it classifies a test sample and prints the results.
import math


class SelfOrganizingMap:
    def get_winner(self, weights: list[list[float]], sample: list[int]) -> int:
        d0 = 0.0
        d1 = 0.0
        # Accumulate the squared Euclidean distance to each weight vector.
        for i in range(len(sample)):
            d0 += math.pow(sample[i] - weights[0][i], 2)
            d1 += math.pow(sample[i] - weights[1][i], 2)
        # The winner is the weight vector with the smaller distance.
        return 0 if d0 < d1 else 1

    def update(
        self, weights: list[list[int | float]], sample: list[int], j: int, alpha: float
    ) -> list[list[int | float]]:
        # Pull every component of the winning vector j a fraction alpha
        # toward the sample.
        for i in range(len(sample)):
            weights[j][i] += alpha * (sample[i] - weights[j][i])
        return weights


def main() -> None:
    # Training examples (m, n)
    training_samples = [[1, 1, 0, 0], [0, 0, 0, 1], [1, 0, 0, 0], [0, 0, 1, 1]]

    # Weight initialization (n, c)
    weights = [[0.2, 0.6, 0.5, 0.9], [0.8, 0.4, 0.7, 0.3]]

    self_organizing_map = SelfOrganizingMap()
    epochs = 3
    alpha = 0.5

    # Training
    for _ in range(epochs):
        for j in range(len(training_samples)):
            # Training sample
            sample = training_samples[j]
            # Compute the winning vector
            winner = self_organizing_map.get_winner(weights, sample)
            # Update the winning vector
            weights = self_organizing_map.update(weights, sample, winner, alpha)

    # Classify a test sample
    sample = [0, 0, 0, 1]
    winner = self_organizing_map.get_winner(weights, sample)

    # Results
    print(f"Clusters that the test sample belongs to : {winner}")
    print(f"Weights that have been trained : {weights}")


# Running the main function
if __name__ == "__main__":
    main()
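A small numeric check of the update rule w <- w + alpha * (x - w), using the class above:

som = SelfOrganizingMap()
# Move weight row 1 a fraction alpha = 0.1 toward the sample [1, 2, 3]:
print(som.update([[1, 2, 3], [4, 5, 6]], [1, 2, 3], 1, 0.1))
# [[1, 2, 3], [3.7, 4.7, 5.7]]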
Implementation of sequential minimal optimization (SMO) for support vector machines (SVMs).

Sequential minimal optimization (SMO) is an algorithm for solving the quadratic programming (QP) problem that arises during the training of support vector machines. It was invented by John Platt in 1998.

Input:
    0: type: numpy.ndarray
    1: first column of ndarray must be tags of samples, must be 1 or -1
    2: rows of ndarray represent samples

Usage:
    Command:
        python3 sequential_minimum_optimization.py
    Code:
        from sequential_minimum_optimization import SmoSVM, Kernel

        kernel = Kernel(kernel='poly', degree=3, coef0=1, gamma=0.5)
        init_alphas = np.zeros(train.shape[0])
        SVM = SmoSVM(train=train, alpha_list=init_alphas, kernel_func=kernel,
                     cost=0.4, b=0.0, tolerance=0.001)
        SVM.fit()
        predict = SVM.predict(test_samples)

Reference:
    https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/smo-book.pdf
    https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/tr-98-14.pdf

The fit() method calculates the alphas using the SMO algorithm: (1) find alpha1 and alpha2, (2) calculate the new alpha2 and the new alpha1, (3) update the threshold b, and (4) update the error values. Only the errors of non-bound samples are cached; if i1 or i2 becomes non-bound, its error value is reset to zero. predict() classifies test samples (their feature length must match the training samples). Helper methods check whether a sample violates the KKT condition, evaluate the kernel function (kernel values for train samples are precomputed in a matrix over all possible i1, i2 to save time), and fetch a sample's error, which has two cases: a non-bound sample's error is fetched from the _error cache, while a bound sample's error is the predicted value g(xi) minus the true value yi.

Choosing the first alpha proceeds in steps: (1) loop over all samples; (2) loop over all non-bound samples until none of them violates the KKT condition; (3) repeat these two passes until no sample at all violates the KKT condition. Choosing the second alpha uses a heuristic: (1) pick the alpha2 that maximizes the step size |E1 - E2|; (2) failing that, start at a random point and loop over the non-bound samples; (3) failing that, start at a random point and loop over all samples. To get the new alphas, first calculate the bounds L and H that constrain the new alpha2, then calculate eta = K11 + K22 - 2*K12. When eta is positive, the unconstrained minimizer is clipped to [L, H]; otherwise the objective function is evaluated at both bounds and the alpha2 with the smaller objective wins. alpha1 is then adjusted to keep the equality constraint satisfied, with its own boundary clipping.

By default the data is normalised with min-max scaling. The test/demo code (1) downloads the breast-cancer dataset and loads it into a pandas DataFrame, (2) pre-processes the data, (3) divides it into train_data and test_data, (4) chooses a kernel function and (optionally) sets the initial alphas to zero, (5) calculates the best alphas using the SMO algorithm and predicts the test_data samples, and (6) checks the accuracy (changing stdout to log progress). For plotting, we cannot obtain the optimum w of a kernel SVM model, unlike a linear SVM; for this reason we generate randomly distributed points with high density, compute their predicted values using the trained model, and use those predictions to draw a contour map that represents the SVM's partition boundary, together with all train samples and the support vectors.
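To make the box-constraint step concrete, here is a minimal sketch of how the bounds L and H on the new alpha2 follow from the labels, mirroring the update rules described above (the helper name is illustrative, not part of the implementation below):

def alpha2_bounds(a1: float, a2: float, y1: int, y2: int, c: float) -> tuple[float, float]:
    # When the labels differ (y1 * y2 == -1), the difference a2 - a1 is
    # conserved by the equality constraint; when they match, the sum a1 + a2
    # is conserved. Both alphas must stay within the box [0, C].
    if y1 * y2 == -1:
        return max(0.0, a2 - a1), min(c, c + a2 - a1)
    return max(0.0, a2 + a1 - c), min(c, a2 + a1)

print(alpha2_bounds(a1=0.3, a2=0.2, y1=1, y2=-1, c=0.4))  # (0.0, 0.3)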
import os import sys import urllib.request import numpy as np import pandas as pd from matplotlib import pyplot as plt from sklearn.datasets import make_blobs, make_circles from sklearn.preprocessing import StandardScaler CANCER_DATASET_URL = ( "https://archive.ics.uci.edu/ml/machine-learning-databases/" "breast-cancer-wisconsin/wdbc.data" ) class SmoSVM: def __init__( self, train, kernel_func, alpha_list=None, cost=0.4, b=0.0, tolerance=0.001, auto_norm=True, ): self._init = True self._auto_norm = auto_norm self._c = np.float64(cost) self._b = np.float64(b) self._tol = np.float64(tolerance) if tolerance > 0.0001 else np.float64(0.001) self.tags = train[:, 0] self.samples = self._norm(train[:, 1:]) if self._auto_norm else train[:, 1:] self.alphas = alpha_list if alpha_list is not None else np.zeros(train.shape[0]) self.Kernel = kernel_func self._eps = 0.001 self._all_samples = list(range(self.length)) self._K_matrix = self._calculate_k_matrix() self._error = np.zeros(self.length) self._unbound = [] self.choose_alpha = self._choose_alphas() def fit(self): k = self._k state = None while True: try: i1, i2 = self.choose_alpha.send(state) state = None except StopIteration: print("Optimization done!\nEvery sample satisfy the KKT condition!") break y1, y2 = self.tags[i1], self.tags[i2] a1, a2 = self.alphas[i1].copy(), self.alphas[i2].copy() e1, e2 = self._e(i1), self._e(i2) args = (i1, i2, a1, a2, e1, e2, y1, y2) a1_new, a2_new = self._get_new_alpha(*args) if not a1_new and not a2_new: state = False continue self.alphas[i1], self.alphas[i2] = a1_new, a2_new b1_new = np.float64( -e1 - y1 * k(i1, i1) * (a1_new - a1) - y2 * k(i2, i1) * (a2_new - a2) + self._b ) b2_new = np.float64( -e2 - y2 * k(i2, i2) * (a2_new - a2) - y1 * k(i1, i2) * (a1_new - a1) + self._b ) if 0.0 < a1_new < self._c: b = b1_new if 0.0 < a2_new < self._c: b = b2_new if not (np.float64(0) < a2_new < self._c) and not ( np.float64(0) < a1_new < self._c ): b = (b1_new + b2_new) / 2.0 b_old = self._b self._b = b self._unbound = [i for i in self._all_samples if self._is_unbound(i)] for s in self.unbound: if s in (i1, i2): continue self._error[s] += ( y1 * (a1_new - a1) * k(i1, s) + y2 * (a2_new - a2) * k(i2, s) + (self._b - b_old) ) if self._is_unbound(i1): self._error[i1] = 0 if self._is_unbound(i2): self._error[i2] = 0 def predict(self, test_samples, classify=True): if test_samples.shape[1] > self.samples.shape[1]: raise ValueError( "Test samples' feature length does not equal to that of train samples" ) if self._auto_norm: test_samples = self._norm(test_samples) results = [] for test_sample in test_samples: result = self._predict(test_sample) if classify: results.append(1 if result > 0 else -1) else: results.append(result) return np.array(results) def _check_obey_kkt(self, index): alphas = self.alphas tol = self._tol r = self._e(index) * self.tags[index] c = self._c return (r < -tol and alphas[index] < c) or (r > tol and alphas[index] > 0.0) def _k(self, i1, i2): if isinstance(i2, np.ndarray): return self.Kernel(self.samples[i1], i2) else: return self._K_matrix[i1, i2] def _e(self, index): if self._is_unbound(index): return self._error[index] else: gx = np.dot(self.alphas * self.tags, self._K_matrix[:, index]) + self._b yi = self.tags[index] return gx - yi def _calculate_k_matrix(self): k_matrix = np.zeros([self.length, self.length]) for i in self._all_samples: for j in self._all_samples: k_matrix[i, j] = np.float64( self.Kernel(self.samples[i, :], self.samples[j, :]) ) return k_matrix def _predict(self, sample): k = self._k 
predicted_value = ( np.sum( [ self.alphas[i1] * self.tags[i1] * k(i1, sample) for i1 in self._all_samples ] ) + self._b ) return predicted_value def _choose_alphas(self): locis = yield from self._choose_a1() if not locis: return None return locis def _choose_a1(self): while True: all_not_obey = True print("scanning all sample!") for i1 in [i for i in self._all_samples if self._check_obey_kkt(i)]: all_not_obey = False yield from self._choose_a2(i1) print("scanning non-bound sample!") while True: not_obey = True for i1 in [ i for i in self._all_samples if self._check_obey_kkt(i) and self._is_unbound(i) ]: not_obey = False yield from self._choose_a2(i1) if not_obey: print("all non-bound samples fit the KKT condition!") break if all_not_obey: print("all samples fit the KKT condition! Optimization done!") break return False def _choose_a2(self, i1): self._unbound = [i for i in self._all_samples if self._is_unbound(i)] if len(self.unbound) > 0: tmp_error = self._error.copy().tolist() tmp_error_dict = { index: value for index, value in enumerate(tmp_error) if self._is_unbound(index) } if self._e(i1) >= 0: i2 = min(tmp_error_dict, key=lambda index: tmp_error_dict[index]) else: i2 = max(tmp_error_dict, key=lambda index: tmp_error_dict[index]) cmd = yield i1, i2 if cmd is None: return for i2 in np.roll(self.unbound, np.random.choice(self.length)): cmd = yield i1, i2 if cmd is None: return for i2 in np.roll(self._all_samples, np.random.choice(self.length)): cmd = yield i1, i2 if cmd is None: return def _get_new_alpha(self, i1, i2, a1, a2, e1, e2, y1, y2): k = self._k if i1 == i2: return None, None s = y1 * y2 if s == -1: l, h = max(0.0, a2 - a1), min(self._c, self._c + a2 - a1) else: l, h = max(0.0, a2 + a1 - self._c), min(self._c, a2 + a1) if l == h: return None, None k11 = k(i1, i1) k22 = k(i2, i2) k12 = k(i1, i2) if (eta := k11 + k22 - 2.0 * k12) > 0.0: a2_new_unc = a2 + (y2 * (e1 - e2)) / eta if a2_new_unc >= h: a2_new = h elif a2_new_unc <= l: a2_new = l else: a2_new = a2_new_unc else: b = self._b l1 = a1 + s * (a2 - l) h1 = a1 + s * (a2 - h) f1 = y1 * (e1 + b) - a1 * k(i1, i1) - s * a2 * k(i1, i2) f2 = y2 * (e2 + b) - a2 * k(i2, i2) - s * a1 * k(i1, i2) ol = ( l1 * f1 + l * f2 + 1 / 2 * l1**2 * k(i1, i1) + 1 / 2 * l**2 * k(i2, i2) + s * l * l1 * k(i1, i2) ) oh = ( h1 * f1 + h * f2 + 1 / 2 * h1**2 * k(i1, i1) + 1 / 2 * h**2 * k(i2, i2) + s * h * h1 * k(i1, i2) ) if ol < (oh - self._eps): a2_new = l elif ol > oh + self._eps: a2_new = h else: a2_new = a2 a1_new = a1 + s * (a2 - a2_new) if a1_new < 0: a2_new += s * a1_new a1_new = 0 if a1_new > self._c: a2_new += s * (a1_new - self._c) a1_new = self._c return a1_new, a2_new def _norm(self, data): if self._init: self._min = np.min(data, axis=0) self._max = np.max(data, axis=0) self._init = False return (data - self._min) / (self._max - self._min) else: return (data - self._min) / (self._max - self._min) def _is_unbound(self, index): return bool(0.0 < self.alphas[index] < self._c) def _is_support(self, index): return bool(self.alphas[index] > 0) @property def unbound(self): return self._unbound @property def support(self): return [i for i in range(self.length) if self._is_support(i)] @property def length(self): return self.samples.shape[0] class Kernel: def __init__(self, kernel, degree=1.0, coef0=0.0, gamma=1.0): self.degree = np.float64(degree) self.coef0 = np.float64(coef0) self.gamma = np.float64(gamma) self._kernel_name = kernel self._kernel = self._get_kernel(kernel_name=kernel) self._check() def _polynomial(self, v1, v2): return (self.gamma * 
np.inner(v1, v2) + self.coef0) ** self.degree def _linear(self, v1, v2): return np.inner(v1, v2) + self.coef0 def _rbf(self, v1, v2): return np.exp(-1 * (self.gamma * np.linalg.norm(v1 - v2) ** 2)) def _check(self): if self._kernel == self._rbf and self.gamma < 0: raise ValueError("gamma value must greater than 0") def _get_kernel(self, kernel_name): maps = {"linear": self._linear, "poly": self._polynomial, "rbf": self._rbf} return maps[kernel_name] def __call__(self, v1, v2): return self._kernel(v1, v2) def __repr__(self): return self._kernel_name def count_time(func): def call_func(*args, **kwargs): import time start_time = time.time() func(*args, **kwargs) end_time = time.time() print(f"smo algorithm cost {end_time - start_time} seconds") return call_func @count_time def test_cancel_data(): print("Hello!\nStart test svm by smo algorithm!") if not os.path.exists(r"cancel_data.csv"): request = urllib.request.Request( CANCER_DATASET_URL, headers={"User-Agent": "Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)"}, ) response = urllib.request.urlopen(request) content = response.read().decode("utf-8") with open(r"cancel_data.csv", "w") as f: f.write(content) data = pd.read_csv(r"cancel_data.csv", header=None) del data[data.columns.tolist()[0]] data = data.dropna(axis=0) data = data.replace({"M": np.float64(1), "B": np.float64(-1)}) samples = np.array(data)[:, :] train_data, test_data = samples[:328, :], samples[328:, :] test_tags, test_samples = test_data[:, 0], test_data[:, 1:] mykernel = Kernel(kernel="rbf", degree=5, coef0=1, gamma=0.5) al = np.zeros(train_data.shape[0]) mysvm = SmoSVM( train=train_data, kernel_func=mykernel, alpha_list=al, cost=0.4, b=0.0, tolerance=0.001, ) mysvm.fit() predict = mysvm.predict(test_samples) score = 0 test_num = test_tags.shape[0] for i in range(test_tags.shape[0]): if test_tags[i] == predict[i]: score += 1 print(f"\nall: {test_num}\nright: {score}\nfalse: {test_num - score}") print(f"Rough Accuracy: {score / test_tags.shape[0]}") def test_demonstration(): print("\nStart plot,please wait!!!") sys.stdout = open(os.devnull, "w") ax1 = plt.subplot2grid((2, 2), (0, 0)) ax2 = plt.subplot2grid((2, 2), (0, 1)) ax3 = plt.subplot2grid((2, 2), (1, 0)) ax4 = plt.subplot2grid((2, 2), (1, 1)) ax1.set_title("linear svm,cost:0.1") test_linear_kernel(ax1, cost=0.1) ax2.set_title("linear svm,cost:500") test_linear_kernel(ax2, cost=500) ax3.set_title("rbf kernel svm,cost:0.1") test_rbf_kernel(ax3, cost=0.1) ax4.set_title("rbf kernel svm,cost:500") test_rbf_kernel(ax4, cost=500) sys.stdout = sys.__stdout__ print("Plot done!!!") def test_linear_kernel(ax, cost): train_x, train_y = make_blobs( n_samples=500, centers=2, n_features=2, random_state=1 ) train_y[train_y == 0] = -1 scaler = StandardScaler() train_x_scaled = scaler.fit_transform(train_x, train_y) train_data = np.hstack((train_y.reshape(500, 1), train_x_scaled)) mykernel = Kernel(kernel="linear", degree=5, coef0=1, gamma=0.5) mysvm = SmoSVM( train=train_data, kernel_func=mykernel, cost=cost, tolerance=0.001, auto_norm=False, ) mysvm.fit() plot_partition_boundary(mysvm, train_data, ax=ax) def test_rbf_kernel(ax, cost): train_x, train_y = make_circles( n_samples=500, noise=0.1, factor=0.1, random_state=1 ) train_y[train_y == 0] = -1 scaler = StandardScaler() train_x_scaled = scaler.fit_transform(train_x, train_y) train_data = np.hstack((train_y.reshape(500, 1), train_x_scaled)) mykernel = Kernel(kernel="rbf", degree=5, coef0=1, gamma=0.5) mysvm = SmoSVM( train=train_data, kernel_func=mykernel, cost=cost, tolerance=0.001, 
auto_norm=False, ) mysvm.fit() plot_partition_boundary(mysvm, train_data, ax=ax) def plot_partition_boundary( model, train_data, ax, resolution=100, colors=("b", "k", "r") ): train_data_x = train_data[:, 1] train_data_y = train_data[:, 2] train_data_tags = train_data[:, 0] xrange = np.linspace(train_data_x.min(), train_data_x.max(), resolution) yrange = np.linspace(train_data_y.min(), train_data_y.max(), resolution) test_samples = np.array([(x, y) for x in xrange for y in yrange]).reshape( resolution * resolution, 2 ) test_tags = model.predict(test_samples, classify=False) grid = test_tags.reshape((len(xrange), len(yrange))) ax.contour( xrange, yrange, np.mat(grid).T, levels=(-1, 0, 1), linestyles=("--", "-", "--"), linewidths=(1, 1, 1), colors=colors, ) ax.scatter( train_data_x, train_data_y, c=train_data_tags, cmap=plt.cm.Dark2, lw=0, alpha=0.5, ) support = model.support ax.scatter( train_data_x[support], train_data_y[support], c=train_data_tags[support], cmap=plt.cm.Dark2, ) if __name__ == "__main__": test_cancel_data() test_demonstration() plt.show()
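A minimal usage sketch of the classes above; the toy data, cost value, and expected output are illustrative assumptions, not part of the original tests:

import numpy as np

# Column 0 is the tag in {1, -1}; the remaining columns are features.
train = np.array(
    [[1.0, 0.0, 0.1], [1.0, 0.2, 0.0], [-1.0, 2.0, 2.1], [-1.0, 2.2, 1.9]]
)
toy_svm = SmoSVM(train=train, kernel_func=Kernel(kernel="linear"), cost=1.0)
toy_svm.fit()
print(toy_svm.predict(np.array([[0.1, 0.0], [2.1, 2.0]])))  # expected: [ 1 -1]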
Similarity search: https://en.wikipedia.org/wiki/Similarity_search

Similarity search is a search algorithm for finding the nearest vector from a set of vectors; it is used in natural language processing. This implementation calculates distance with the Euclidean distance and returns, for each query vector, a list containing two items: (1) the nearest vector and (2) the distance between the query vector and the nearest vector (a float).

euclidean(input_a, input_b) calculates the Euclidean distance between two vectors. Params: input_a, ndarray of the first vector; input_b, ndarray of the second vector. Returns the Euclidean distance of input_a and input_b, computed with math.sqrt; the result is a float. Examples: euclidean(np.array([0]), np.array([1])) == 1.0; euclidean(np.array([0, 1]), np.array([1, 1])) == 1.0; euclidean(np.array([0, 0, 0]), np.array([0, 0, 1])) == 1.0.

similarity_search(dataset, value_array). Params: dataset, the set containing the vectors (should be an ndarray); value_array, the vector or vectors whose nearest neighbour in the dataset we want. Returns, per query, (1) the nearest vector and (2) the distance from that vector. Examples: dataset = np.array([[0], [1], [2]]), value_array = np.array([[0]]) gives [[[0], 0.0]]; dataset = np.array([[0, 0], [1, 1], [2, 2]]), value_array = np.array([[0, 1]]) gives [[[0, 0], 1.0]]; dataset = np.array([[0, 0, 0], [1, 1, 1], [2, 2, 2]]), value_array = np.array([[0, 0, 1]]) gives [[[0, 0, 0], 1.0]]; dataset = np.array([[0, 0, 0], [1, 1, 1], [2, 2, 2]]), value_array = np.array([[0, 0, 0], [0, 0, 1]]) gives [[[0, 0, 0], 0.0], [[0, 0, 0], 1.0]].

These are the errors that might occur. (1) If the dimensions are different, for example dataset is a 2-D array and value_array is 1-D: dataset = np.array([[1]]), value_array = np.array([1]) raises ValueError("Wrong input data's dimensions... dataset : 2, value_array : 1"). (2) If the data's shapes are different, for example dataset has shape (3, 2) and value_array has shape (2, 3) — we expect the same row shape for both arrays, so this is wrong: dataset = np.array([[0, 0], [1, 1], [2, 2]]), value_array = np.array([[0, 0, 0], [0, 0, 1]]) raises ValueError("Wrong input data's shape... dataset : 2, value_array : 3"). (3) If the data types are different — when comparing, we expect the same types, otherwise an error is raised: dataset with dtype np.float32 and value_array with dtype np.int32 raises TypeError("Input data have different datatype... dataset : float32, value_array : int32").

cosine_similarity(input_a, input_b) calculates the cosine similarity between two vectors. Params: input_a, ndarray of the first vector; input_b, ndarray of the second vector. Returns the cosine similarity of input_a and input_b; the result is a float. Examples: cosine_similarity(np.array([1]), np.array([1])) == 1.0; cosine_similarity(np.array([1, 2]), np.array([6, 32])) == 0.9615239476408232.
from __future__ import annotations import math import numpy as np from numpy.linalg import norm def euclidean(input_a: np.ndarray, input_b: np.ndarray) -> float: return math.sqrt(sum(pow(a - b, 2) for a, b in zip(input_a, input_b))) def similarity_search( dataset: np.ndarray, value_array: np.ndarray ) -> list[list[list[float] | float]]: if dataset.ndim != value_array.ndim: msg = ( "Wrong input data's dimensions... " f"dataset : {dataset.ndim}, value_array : {value_array.ndim}" ) raise ValueError(msg) try: if dataset.shape[1] != value_array.shape[1]: msg = ( "Wrong input data's shape... " f"dataset : {dataset.shape[1]}, value_array : {value_array.shape[1]}" ) raise ValueError(msg) except IndexError: if dataset.ndim != value_array.ndim: raise TypeError("Wrong shape") if dataset.dtype != value_array.dtype: msg = ( "Input data have different datatype... " f"dataset : {dataset.dtype}, value_array : {value_array.dtype}" ) raise TypeError(msg) answer = [] for value in value_array: dist = euclidean(value, dataset[0]) vector = dataset[0].tolist() for dataset_value in dataset[1:]: temp_dist = euclidean(value, dataset_value) if dist > temp_dist: dist = temp_dist vector = dataset_value.tolist() answer.append([vector, dist]) return answer def cosine_similarity(input_a: np.ndarray, input_b: np.ndarray) -> float: return np.dot(input_a, input_b) / (norm(input_a) * norm(input_b)) if __name__ == "__main__": import doctest doctest.testmod()
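A quick usage sketch of the helpers above; the values mirror the doctests:

import numpy as np

dataset = np.array([[0, 0], [1, 1], [2, 2]], dtype=np.float64)
query = np.array([[0, 1]], dtype=np.float64)
print(similarity_search(dataset, query))  # [[[0.0, 0.0], 1.0]]
print(cosine_similarity(np.array([1, 2]), np.array([6, 32])))  # 0.9615239476408232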
norm_squared(vector) returns the squared second norm of a vector: norm_squared(v) = sum(x * x for x in v). Args: vector (ndarray), the input vector. Returns: float, the squared second norm of the vector. Examples: norm_squared([1, 2]) == 5; norm_squared(np.asarray([1, 2])) == 5; norm_squared([0, 0]) == 0.

SVC is a support vector classifier. Args: kernel (str), the kernel to use; default: 'linear'; possible choices: 'linear', 'rbf'; regularization, the constraint for the soft margin (data not linearly separable); default: unbound. Errors: SVC(kernel='asdf') raises ValueError('Unknown kernel: asdf'); SVC(kernel='rbf') raises ValueError('rbf kernel requires gamma'); SVC(kernel='rbf', gamma=-1) raises ValueError('gamma must be > 0'). In the future there could be a default value for gamma like in sklearn (sklearn's default gamma is 1 / (n_features * X.var()) — see the wiki; previously it was 1 / n_features).

Kernels: the linear kernel, np.dot(vector1, vector2), behaves as if no kernel were used at all. The rbf (radial basis function) kernel returns exp(-gamma * norm_squared(vector1 - vector2)); for more information see https://en.wikipedia.org/wiki/Radial_basis_function_kernel. Args: vector1 (ndarray), the first vector; vector2 (ndarray), the second vector.

fit(observations, classes) fits the SVC with a set of observations. Args: observations (list[ndarray]), the list of observations; classes (ndarray), the classification of each observation, in {1, -1}. Wolfe's dual is used to calculate w. Primal problem: minimize 1/2 * norm_squared(w), subject to y_n * (w . x_n + b) >= 1. Dual problem (with l a vector): maximize sum_n(l_n) - 1/2 * sum_n(sum_m(l_n * l_m * y_n * y_m * (x_n . x_m))), subject to self.c >= l_n >= 0 and sum_n(l_n * y_n) = 0. Then we get w using w = sum_n(l_n * y_n * x_n), and at the end we can get b = mean(y_n - w . x_n). Since we use kernels, we only need l_star to calculate b and to classify observations. The inner helper to_minimize(candidate) is the opposite of the function to maximize: given a candidate array to test, it returns (as a float) Wolfe's dual result to minimize. After solving, the mean offset of the separation plane to the points is calculated.

predict(observation) returns the expected class of an observation, as an int in {1, -1}. Args: observation (vector), the observation. Example: xs = [np.asarray([0, 1]), np.asarray([0, 2]), np.asarray([1, 1]), np.asarray([1, 2])]; y = np.asarray([1, 1, -1, -1]); s = SVC(); s.fit(xs, y); then s.predict(np.asarray([0, 1])) == 1, s.predict(np.asarray([1, 1])) == -1, and s.predict(np.asarray([2, 2])) == -1.
import numpy as np from numpy import ndarray from scipy.optimize import Bounds, LinearConstraint, minimize def norm_squared(vector: ndarray) -> float: return np.dot(vector, vector) class SVC: def __init__( self, *, regularization: float = np.inf, kernel: str = "linear", gamma: float = 0.0, ) -> None: self.regularization = regularization self.gamma = gamma if kernel == "linear": self.kernel = self.__linear elif kernel == "rbf": if self.gamma == 0: raise ValueError("rbf kernel requires gamma") if not isinstance(self.gamma, (float, int)): raise ValueError("gamma must be float or int") if not self.gamma > 0: raise ValueError("gamma must be > 0") self.kernel = self.__rbf else: msg = f"Unknown kernel: {kernel}" raise ValueError(msg) def __linear(self, vector1: ndarray, vector2: ndarray) -> float: return np.dot(vector1, vector2) def __rbf(self, vector1: ndarray, vector2: ndarray) -> float: return np.exp(-(self.gamma * norm_squared(vector1 - vector2))) def fit(self, observations: list[ndarray], classes: ndarray) -> None: self.observations = observations self.classes = classes (n,) = np.shape(classes) def to_minimize(candidate: ndarray) -> float: s = 0 (n,) = np.shape(candidate) for i in range(n): for j in range(n): s += ( candidate[i] * candidate[j] * classes[i] * classes[j] * self.kernel(observations[i], observations[j]) ) return 1 / 2 * s - sum(candidate) ly_contraint = LinearConstraint(classes, 0, 0) l_bounds = Bounds(0, self.regularization) l_star = minimize( to_minimize, np.ones(n), bounds=l_bounds, constraints=[ly_contraint] ).x self.optimum = l_star s = 0 for i in range(n): for j in range(n): s += classes[i] - classes[i] * self.optimum[i] * self.kernel( observations[i], observations[j] ) self.offset = s / n def predict(self, observation: ndarray) -> int: s = sum( self.optimum[n] * self.classes[n] * self.kernel(self.observations[n], observation) for n in range(len(self.classes)) ) return 1 if s + self.offset >= 0 else -1 if __name__ == "__main__": import doctest doctest.testmod()
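The predict doctest from the description, as a runnable snippet:

import numpy as np

xs = [np.asarray([0, 1]), np.asarray([0, 2]), np.asarray([1, 1]), np.asarray([1, 2])]
y = np.asarray([1, 1, -1, -1])
clf = SVC()
clf.fit(xs, y)
print(clf.predict(np.asarray([0, 1])))  # 1
print(clf.predict(np.asarray([1, 1])))  # -1
print(clf.predict(np.asarray([2, 2])))  # -1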
tf-idf Wikipedia: https://en.wikipedia.org/wiki/Tf%E2%80%93idf

tf-idf and other word-frequency algorithms are often used as a weighting factor in information retrieval and text mining; 83% of text-based recommender systems use tf-idf for term weighting. In layman's terms, tf-idf is a statistic intended to reflect how important a word is to a document in a corpus (a collection of documents).

Here several word-frequency algorithms that are commonly used in information retrieval are implemented: term frequency, document frequency, and tf-idf (term frequency - inverse document frequency) are included.

Term frequency is a statistical function that returns a number representing how frequently an expression occurs in a document; this indicates how significant a particular term is in a given document. Document frequency is a statistical function that returns an integer representing the number of documents in a corpus that a term occurs in (where the maximum number returned would be the number of documents in the corpus). Inverse document frequency is mathematically written as log10(N/df), where N is the number of documents in your corpus and df is the document frequency; if df is 0, a ZeroDivisionError will be thrown. tf-idf is a measure of the originality of a term; it is mathematically written as tf * log10(N/df) and compares the number of times a term appears in a document with the number of documents the term appears in; if df is 0, a ZeroDivisionError will be thrown.

term_frequency(term, document) returns the number of times a term occurs within a given document. Params: term, the term to search a document for; document, the document to search within. Returns an integer representing the number of times the term is found within the document. All punctuation and newlines are stripped and replaced with '' before word tokenization. Example: term_frequency('to', 'To be, or not to be') == 2.

document_frequency(term, corpus) calculates the number of documents in a corpus that contain a given term. Params: term, the term to search each document for; corpus, a collection of documents, with each document separated by a newline. Returns the number of documents in the corpus that contain the term you are searching for, and the number of documents in the corpus. Punctuation is stripped and replaced with ''. Example: document_frequency('first', 'This is the first document in the corpus.\nThIs is the second document in the corpus.\nTHIS is the third document in the corpus.') == (1, 3).

inverse_document_frequency(df, n, smoothing=False) returns a number denoting the importance of a word; this measure of importance is calculated by log10(n/df), where n is the number of documents and df is the document frequency. Params: df, the document frequency; n, the number of documents in the corpus; smoothing — if True, return the smoothed idf, 1 + log10(n/(1+df)). Examples: inverse_document_frequency(3, 0) raises ValueError('log10(0) is undefined.'); inverse_document_frequency(1, 3) == 0.477; inverse_document_frequency(0, 3) raises ZeroDivisionError('df must be > 0'); inverse_document_frequency(0, 3, True) == 1.477.

tf_idf(tf, idf) combines the term frequency and inverse document frequency functions to calculate the originality of a term; this originality is calculated by multiplying the term frequency and the inverse document frequency: tf-idf = tf * idf. Params: tf, the term frequency; idf, the inverse document frequency. Example: tf_idf(2, 0.477) == 0.954.
import string
from math import log10


def term_frequency(term: str, document: str) -> int:
    # strip all punctuation and newlines and replace them with ''
    document_without_punctuation = document.translate(
        str.maketrans("", "", string.punctuation)
    ).replace("\n", "")
    tokenize_document = document_without_punctuation.split(" ")  # word tokenization
    return len([word for word in tokenize_document if word.lower() == term.lower()])


def document_frequency(term: str, corpus: str) -> tuple[int, int]:
    # strip all punctuation and replace it with ''
    corpus_without_punctuation = corpus.lower().translate(
        str.maketrans("", "", string.punctuation)
    )
    docs = corpus_without_punctuation.split("\n")
    term = term.lower()
    return (len([doc for doc in docs if term in doc]), len(docs))


def inverse_document_frequency(df: int, n: int, smoothing: bool = False) -> float:
    if smoothing:
        if n == 0:
            raise ValueError("log10(0) is undefined.")
        return round(1 + log10(n / (1 + df)), 3)
    if df == 0:
        raise ZeroDivisionError("df must be > 0")
    elif n == 0:
        raise ValueError("log10(0) is undefined.")
    return round(log10(n / df), 3)


def tf_idf(tf: int, idf: float) -> float:
    return round(tf * idf, 3)
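A short end-to-end sketch combining the four functions above; the corpus is the one from the document_frequency example:

corpus = (
    "This is the first document in the corpus.\n"
    "ThIs is the second document in the corpus.\n"
    "THIS is the third document in the corpus."
)
tf = term_frequency("first", corpus.split("\n")[0])  # 1
df, n = document_frequency("first", corpus)  # (1, 3)
idf = inverse_document_frequency(df, n)  # 0.477
print(tf_idf(tf, idf))  # 0.477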
XGBoost classifier example.

data_handling(data) splits the dataset into features and target; 'data' holds the features. Examples: data_handling({'data': '[5.1, 3.5, 1.4, 0.2]', 'target': [0]}) returns ('[5.1, 3.5, 1.4, 0.2]', [0]); data_handling({'data': '[4.9, 3.0, 1.4, 0.2], [4.7, 3.2, 1.3, 0.2]', 'target': [0, 0]}) returns ('[4.9, 3.0, 1.4, 0.2], [4.7, 3.2, 1.3, 0.2]', [0, 0]).

This test is broken: xgboost(np.array([[5.1, 3.6, 1.4, 0.2]]), np.array([0])) should return XGBClassifier(base_score=0.5, booster='gbtree', callbacks=None, colsample_bylevel=1, colsample_bynode=1, colsample_bytree=1, early_stopping_rounds=None, enable_categorical=False, eval_metric=None, gamma=0, gpu_id=-1, grow_policy='depthwise', importance_type=None, interaction_constraints='', learning_rate=0.300000012, max_bin=256, max_cat_to_onehot=4, max_delta_step=0, max_depth=6, max_leaves=0, min_child_weight=1, missing=nan, monotone_constraints='()', n_estimators=100, n_jobs=0, num_parallel_tree=1, predictor='auto', random_state=0, reg_alpha=0, reg_lambda=1, ...).

main(): the main URL for the algorithm is https://xgboost.readthedocs.io/en/stable/. The Iris-type dataset is used to demonstrate the algorithm: load the Iris dataset, create an XGBoost classifier from the training data, and display the confusion matrix of the classifier with both training and test sets.
import numpy as np from matplotlib import pyplot as plt from sklearn.datasets import load_iris from sklearn.metrics import ConfusionMatrixDisplay from sklearn.model_selection import train_test_split from xgboost import XGBClassifier def data_handling(data: dict) -> tuple: return (data["data"], data["target"]) def xgboost(features: np.ndarray, target: np.ndarray) -> XGBClassifier: classifier = XGBClassifier() classifier.fit(features, target) return classifier def main() -> None: iris = load_iris() features, targets = data_handling(iris) x_train, x_test, y_train, y_test = train_test_split( features, targets, test_size=0.25 ) names = iris["target_names"] xgboost_classifier = xgboost(x_train, y_train) ConfusionMatrixDisplay.from_estimator( xgboost_classifier, x_test, y_test, display_labels=names, cmap="Blues", normalize="true", ) plt.title("Normalized Confusion Matrix - IRIS Dataset") plt.show() if __name__ == "__main__": import doctest doctest.testmod(verbose=True) main()
XGBoost regressor example.

data_handling(data) splits the dataset into features and target; 'data' holds the features. Example: data_handling({'data': '[ 8.3252 41. 6.9841269 1.02380952 322. 2.55555556 37.88 -122.23 ]', 'target': [4.526]}) returns ('[ 8.3252 41. 6.9841269 1.02380952 322. 2.55555556 37.88 -122.23 ]', [4.526]).

xgboost(features, target, test_features) predicts the target for the test data. Example: xgboost(np.array([[2.3571, 52., 6.00813008, 1.06775068, 907., 2.45799458, 40.58, -124.26]]), np.array([1.114]), np.array([[1.97840000e+00, 3.70000000e+01, 4.98858447e+00, 1.03881279e+00, 1.14300000e+03, 2.60958904e+00, 3.67800000e+01, -1.19780000e+02]])) returns array([[1.1139996]], dtype=float32).

main(): the URL for this algorithm is https://xgboost.readthedocs.io/en/stable/. The California house-price dataset is used to demonstrate the algorithm: load the California house-price dataset, split the data, fit the regressor, and print the errors. Expected error values: mean absolute error 0.30957163379906033, mean square error 0.22611560196662744.
import numpy as np from sklearn.datasets import fetch_california_housing from sklearn.metrics import mean_absolute_error, mean_squared_error from sklearn.model_selection import train_test_split from xgboost import XGBRegressor def data_handling(data: dict) -> tuple: return (data["data"], data["target"]) def xgboost( features: np.ndarray, target: np.ndarray, test_features: np.ndarray ) -> np.ndarray: xgb = XGBRegressor( verbosity=0, random_state=42, tree_method="exact", base_score=0.5 ) xgb.fit(features, target) predictions = xgb.predict(test_features) predictions = predictions.reshape(len(predictions), 1) return predictions def main() -> None: california = fetch_california_housing() data, target = data_handling(california) x_train, x_test, y_train, y_test = train_test_split( data, target, test_size=0.25, random_state=1 ) predictions = xgboost(x_train, y_train, x_test) print(f"Mean Absolute Error: {mean_absolute_error(y_test, predictions)}") print(f"Mean Square Error: {mean_squared_error(y_test, predictions)}") if __name__ == "__main__": import doctest doctest.testmod(verbose=True) main()
Absolute value.

abs_val(num) finds the absolute value of a number. Examples: abs_val(-5.1) == 5.1; abs_val(-5) == abs_val(5) is True; abs_val(0) == 0.

abs_min(x) returns the element of the list with the smallest absolute value. Examples: abs_min([0, 5, 1, 11]) == 0; abs_min([3, -10, -2]) == -2; abs_min([]) raises ValueError('abs_min() arg is an empty sequence').

abs_max(x) returns the element of the list with the largest absolute value. Examples: abs_max([0, 5, 1, -11]) == -11; abs_max([3, -10, -2]) == -10; abs_max([]) raises ValueError('abs_max() arg is an empty sequence').

abs_max_sort(x) does the same by sorting on absolute value and taking the last element. Examples: abs_max_sort([0, 5, 1, -11]) == -11; abs_max_sort([3, -10, -2]) == -10; abs_max_sort([]) raises ValueError('abs_max_sort() arg is an empty sequence').

test_abs_val() asserts abs_val(0) == 0, abs_val(34) == 34 and abs_val(-100000000000) == 100000000000, and that for a = [-3, -1, 2, -11]: abs_max(a) == -11, abs_max_sort(a) == -11 and abs_min(a) == -1. The main block runs the doctests, runs test_abs_val(), and prints abs_val(-34), i.e. 34.
def abs_val(num: float) -> float: return -num if num < 0 else num def abs_min(x: list[int]) -> int: if len(x) == 0: raise ValueError("abs_min() arg is an empty sequence") j = x[0] for i in x: if abs_val(i) < abs_val(j): j = i return j def abs_max(x: list[int]) -> int: if len(x) == 0: raise ValueError("abs_max() arg is an empty sequence") j = x[0] for i in x: if abs(i) > abs(j): j = i return j def abs_max_sort(x: list[int]) -> int: if len(x) == 0: raise ValueError("abs_max_sort() arg is an empty sequence") return sorted(x, key=abs)[-1] def test_abs_val(): assert abs_val(0) == 0 assert abs_val(34) == 34 assert abs_val(-100000000000) == 100000000000 a = [-3, -1, 2, -11] assert abs_max(a) == -11 assert abs_max_sort(a) == -11 assert abs_min(a) == -1 if __name__ == "__main__": import doctest doctest.testmod() test_abs_val() print(abs_val(-34))
Illustrate how to add integers without using arithmetic operations. Author: Suraj Kumar. Time complexity: O(1). Reference: https://en.wikipedia.org/wiki/Bitwise_operation

Implementation of addition of integers. Examples: add(3, 5) == 8; add(13, 5) == 18; add(-7, 2) == -5; add(0, -7) == -7; add(321, 0) == 321.
def add(first: int, second: int) -> int: while second != 0: c = first & second first ^= second second = c << 1 return first if __name__ == "__main__": import doctest doctest.testmod() first = int(input("Enter the first number: ").strip()) second = int(input("Enter the second number: ").strip()) print(f"{add(first, second) = }")
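For intuition, a hedged helper that runs the same loop while printing each step (add_traced is an illustrative name, not part of the original module):

def add_traced(first: int, second: int) -> int:
    # XOR sums the bits without carrying; AND marks the carry positions,
    # which the left shift moves to the next higher bit.
    while second != 0:
        carry = first & second
        first ^= second
        second = carry << 1
        print(f"first={first:>4b} second={second:>4b}")
    return first

add_traced(3, 5) prints the intermediate (first, second) pairs (6, 2), (4, 4), (0, 8), (8, 0) and finally returns 8.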
Finds the aliquot sum of an input integer, where the aliquot sum of a number n is defined as the sum of all natural numbers less than n that divide n evenly. For example, the aliquot sum of 15 is 1 + 3 + 5 = 9. This is a simple O(n) implementation. Param input_num: a positive integer whose aliquot sum is to be found. Returns the aliquot sum of input_num if input_num is positive; otherwise raises a ValueError. Wikipedia explanation: https://en.wikipedia.org/wiki/Aliquot_sum

Examples: aliquot_sum(15) == 9; aliquot_sum(6) == 6; aliquot_sum(-1) raises ValueError('Input must be positive'); aliquot_sum(0) raises ValueError('Input must be positive'); aliquot_sum(1.6) raises ValueError('Input must be an integer'); aliquot_sum(12) == 16; aliquot_sum(1) == 0; aliquot_sum(19) == 1.
def aliquot_sum(input_num: int) -> int: if not isinstance(input_num, int): raise ValueError("Input must be an integer") if input_num <= 0: raise ValueError("Input must be positive") return sum( divisor for divisor in range(1, input_num // 2 + 1) if input_num % divisor == 0 ) if __name__ == "__main__": import doctest doctest.testmod()
In a multi-threaded download, this algorithm could be used to provide each worker thread with a block of non-overlapping bytes to download. For example: for i in allocation_list: requests.get(url, headers={'Range': f'bytes={i}'}).

allocation_num(number_of_bytes, partitions) divides a number of bytes into x partitions. Params: number_of_bytes, the total number of bytes; partitions, the number of partitions to be allocated. Returns a list of byte ranges to be assigned to each worker thread. Examples: allocation_num(16647, 4) == ['1-4161', '4162-8322', '8323-12483', '12484-16647']; allocation_num(50000, 5) == ['1-10000', '10001-20000', '20001-30000', '30001-40000', '40001-50000']; allocation_num(888, 999) raises ValueError('partitions can not > number_of_bytes!'); allocation_num(888, -4) raises ValueError('partitions must be a positive number!').
from __future__ import annotations def allocation_num(number_of_bytes: int, partitions: int) -> list[str]: if partitions <= 0: raise ValueError("partitions must be a positive number!") if partitions > number_of_bytes: raise ValueError("partitions can not > number_of_bytes!") bytes_per_partition = number_of_bytes // partitions allocation_list = [] for i in range(partitions): start_bytes = i * bytes_per_partition + 1 end_bytes = ( number_of_bytes if i == partitions - 1 else (i + 1) * bytes_per_partition ) allocation_list.append(f"{start_bytes}-{end_bytes}") return allocation_list if __name__ == "__main__": import doctest doctest.testmod()
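As the description suggests, each returned range can feed an HTTP Range header; a hedged sketch (the requests library and the URL are assumptions, and note that HTTP byte ranges are 0-indexed while these ranges start at 1):

import requests  # assumption: available in the environment

url = "https://example.com/large.bin"  # hypothetical file
for byte_range in allocation_num(16647, 4):
    part = requests.get(url, headers={"Range": f"bytes={byte_range}"}, timeout=10)
    # each worker thread would write part.content at its own offset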
Calculate the length of a circular arc from its central angle (in degrees) and radius: arc length = 2 * pi * radius * (angle / 360). Examples: arc_length(45, 5) == 3.9269908169872414; arc_length(120, 15) == 31.415926535897928; arc_length(90, 10) == 15.707963267948966.
from math import pi def arc_length(angle: int, radius: int) -> float: return 2 * pi * radius * (angle / 360) if __name__ == "__main__": print(arc_length(90, 10))
Find the area of various geometric shapes. Wikipedia reference: https://en.wikipedia.org/wiki/Area

surface_area_cube(side_length) calculates the surface area of a cube. Examples: surface_area_cube(1) == 6; surface_area_cube(1.6) == 15.360000000000003; surface_area_cube(0) == 0; surface_area_cube(3) == 54; a negative side length raises ValueError('surface_area_cube() only accepts non-negative values').

surface_area_cuboid(length, breadth, height) calculates the surface area of a cuboid. Examples: surface_area_cuboid(1, 2, 3) == 22; surface_area_cuboid(0, 0, 0) == 0; surface_area_cuboid(1.6, 2.6, 3.6) == 38.56; a negative value in any argument raises ValueError('surface_area_cuboid() only accepts non-negative values').

surface_area_sphere(radius) calculates the surface area of a sphere. Wikipedia reference: https://en.wikipedia.org/wiki/Sphere. Formula: 4 * pi * r^2. Examples: surface_area_sphere(5) == 314.1592653589793; surface_area_sphere(1) == 12.566370614359172; surface_area_sphere(1.6) == 32.169908772759484; surface_area_sphere(0) == 0.0; a negative radius raises ValueError('surface_area_sphere() only accepts non-negative values').

surface_area_hemisphere(radius) calculates the surface area of a hemisphere. Formula: 3 * pi * r^2. Examples: surface_area_hemisphere(5) == 235.61944901923448; surface_area_hemisphere(1) == 9.42477796076938; surface_area_hemisphere(0) == 0.0; surface_area_hemisphere(1.1) == 11.40398133253095; a negative radius raises ValueError('surface_area_hemisphere() only accepts non-negative values').

surface_area_cone(radius, height) calculates the surface area of a cone. Wikipedia reference: https://en.wikipedia.org/wiki/Cone. Formula: pi * r * (r + (h^2 + r^2)^0.5). Examples: surface_area_cone(10, 24) == 1130.9733552923256; surface_area_cone(6, 8) == 301.59289474462014; surface_area_cone(1.6, 2.6) == 23.387862992395807; surface_area_cone(0, 0) == 0.0; a negative value in either argument raises ValueError('surface_area_cone() only accepts non-negative values').

surface_area_conical_frustum(radius_1, radius_2, height) calculates the surface area of a conical frustum. Examples: surface_area_conical_frustum(1, 2, 3) == 45.511728065337266; surface_area_conical_frustum(4, 5, 6) == 300.7913575056268; surface_area_conical_frustum(0, 0, 0) == 0.0; surface_area_conical_frustum(1.6, 2.6, 3.6) == 78.57907060751548; a negative value in any argument raises ValueError('surface_area_conical_frustum() only accepts non-negative values').

surface_area_cylinder(radius, height) calculates the surface area of a cylinder. Wikipedia reference: https://en.wikipedia.org/wiki/Cylinder. Formula: 2 * pi * r * (h + r). Examples: surface_area_cylinder(7, 10) == 747.6990515543707; surface_area_cylinder(1.6, 2.6) == 42.22300526424682; surface_area_cylinder(0, 0) == 0.0; surface_area_cylinder(6, 8) == 527.7875658030853; a negative value in either argument raises ValueError('surface_area_cylinder() only accepts non-negative values').

surface_area_torus(torus_radius, tube_radius) calculates the area of a torus. Wikipedia reference: https://en.wikipedia.org/wiki/Torus. Returns 4 * pi^2 * torus_radius * tube_radius. Examples: surface_area_torus(1, 1) == 39.47841760435743; surface_area_torus(4, 3) == 473.7410112522892; surface_area_torus(3, 4) raises ValueError('surface_area_torus() does not support spindle or self intersecting tori'); surface_area_torus(1.6, 1.6) == 101.06474906715503; surface_area_torus(0, 0) == 0.0; a negative value in either argument raises ValueError('surface_area_torus() only accepts non-negative values').

area_rectangle(length, width) calculates the area of a rectangle. Examples: area_rectangle(10, 20) == 200; area_rectangle(1.6, 2.6) == 4.16; area_rectangle(0, 0) == 0; a negative value in either argument raises ValueError('area_rectangle() only accepts non-negative values').

area_square(side_length) calculates the area of a square. Examples: area_square(10) == 100; area_square(0) == 0; area_square(1.6) == 2.5600000000000005; a negative side length raises ValueError('area_square() only accepts non-negative values').

area_triangle(base, height) calculates the area of a triangle given the base and height. Examples: area_triangle(10, 10) == 50.0; area_triangle(1.6, 2.6) == 2.08; area_triangle(0, 0) == 0.0; a negative value in either argument raises ValueError('area_triangle() only accepts non-negative values').

area_triangle_three_sides(side1, side2, side3) calculates the area of a triangle when the lengths of the three sides are known, using Heron's formula: https://en.wikipedia.org/wiki/Heron%27s_formula. Examples: area_triangle_three_sides(5, 12, 13) == 30.0; area_triangle_three_sides(10, 11, 12) == 51.521233486786784; area_triangle_three_sides(0, 0, 0) == 0.0; area_triangle_three_sides(1.6, 2.6, 3.6) == 1.8703742940919619; a negative side raises ValueError('area_triangle_three_sides() only accepts non-negative values'); area_triangle_three_sides(2, 4, 7), area_triangle_three_sides(2, 7, 4) and area_triangle_three_sides(7, 2, 4) each raise ValueError('Given three sides do not form a triangle').

area_parallelogram(base, height) calculates the area of a parallelogram. Examples: area_parallelogram(10, 20) == 200; area_parallelogram(1.6, 2.6) == 4.16; area_parallelogram(0, 0) == 0; a negative value in either argument raises ValueError('area_parallelogram() only accepts non-negative values').

area_trapezium(base1, base2, height) calculates the area of a trapezium. Examples: area_trapezium(10, 20, 30) == 450.0; area_trapezium(1.6, 2.6, 3.6) == 7.5600000000000005; area_trapezium(0, 0, 0) == 0.0; any combination of negative arguments raises ValueError('area_trapezium() only accepts non-negative values').

area_circle(radius) calculates the area of a circle. Examples: area_circle(20) == 1256.6370614359173; area_circle(1.6) == 8.042477193189871; area_circle(0) == 0; a negative radius raises ValueError('area_circle() only accepts non-negative values').

area_ellipse(radius_x, radius_y) calculates the area of an ellipse. Examples: area_ellipse(10, 10) == 314.1592653589793; area_ellipse(10, 20) == 628.3185307179587; area_ellipse(0, 0) == 0.0; area_ellipse(1.6, 2.6) == 13.06902543893354; a negative value in either argument raises ValueError('area_ellipse() only accepts non-negative values').

area_rhombus(diagonal_1, diagonal_2) calculates the area of a rhombus. Examples: area_rhombus(10, 20) == 100.0; area_rhombus(1.6, 2.6) == 2.08; area_rhombus(0, 0) == 0.0; a negative value in either argument raises ValueError('area_rhombus() only accepts non-negative values').

area_reg_polygon(sides, length) calculates the area of a regular polygon. Wikipedia reference: https://en.wikipedia.org/wiki/Polygon#Regular_polygons. Formula: (n * s^2 * cot(pi/n)) / 4. Examples: area_reg_polygon(3, 10) == 43.301270189221945; area_reg_polygon(4, 10) == 100.00000000000001; a side count that is not an integer greater than or equal to three (e.g. area_reg_polygon(0, 0)) raises ValueError('area_reg_polygon() only accepts integers greater than or equal to three as number of sides'); a negative side length (e.g. area_reg_polygon(5, -2)) raises ValueError('area_reg_polygon() only accepts non-negative values as length of a side').

The main block runs the doctests verbosely, so we can see methods missing tests.
from math import pi, sqrt, tan def surface_area_cube(side_length: float) -> float: if side_length < 0: raise ValueError("surface_area_cube() only accepts non-negative values") return 6 * side_length**2 def surface_area_cuboid(length: float, breadth: float, height: float) -> float: if length < 0 or breadth < 0 or height < 0: raise ValueError("surface_area_cuboid() only accepts non-negative values") return 2 * ((length * breadth) + (breadth * height) + (length * height)) def surface_area_sphere(radius: float) -> float: if radius < 0: raise ValueError("surface_area_sphere() only accepts non-negative values") return 4 * pi * radius**2 def surface_area_hemisphere(radius: float) -> float: if radius < 0: raise ValueError("surface_area_hemisphere() only accepts non-negative values") return 3 * pi * radius**2 def surface_area_cone(radius: float, height: float) -> float: if radius < 0 or height < 0: raise ValueError("surface_area_cone() only accepts non-negative values") return pi * radius * (radius + (height**2 + radius**2) ** 0.5) def surface_area_conical_frustum( radius_1: float, radius_2: float, height: float ) -> float: if radius_1 < 0 or radius_2 < 0 or height < 0: raise ValueError( "surface_area_conical_frustum() only accepts non-negative values" ) slant_height = (height**2 + (radius_1 - radius_2) ** 2) ** 0.5 return pi * ((slant_height * (radius_1 + radius_2)) + radius_1**2 + radius_2**2) def surface_area_cylinder(radius: float, height: float) -> float: if radius < 0 or height < 0: raise ValueError("surface_area_cylinder() only accepts non-negative values") return 2 * pi * radius * (height + radius) def surface_area_torus(torus_radius: float, tube_radius: float) -> float: if torus_radius < 0 or tube_radius < 0: raise ValueError("surface_area_torus() only accepts non-negative values") if torus_radius < tube_radius: raise ValueError( "surface_area_torus() does not support spindle or self intersecting tori" ) return 4 * pow(pi, 2) * torus_radius * tube_radius def area_rectangle(length: float, width: float) -> float: if length < 0 or width < 0: raise ValueError("area_rectangle() only accepts non-negative values") return length * width def area_square(side_length: float) -> float: if side_length < 0: raise ValueError("area_square() only accepts non-negative values") return side_length**2 def area_triangle(base: float, height: float) -> float: if base < 0 or height < 0: raise ValueError("area_triangle() only accepts non-negative values") return (base * height) / 2 def area_triangle_three_sides(side1: float, side2: float, side3: float) -> float: if side1 < 0 or side2 < 0 or side3 < 0: raise ValueError("area_triangle_three_sides() only accepts non-negative values") elif side1 + side2 < side3 or side1 + side3 < side2 or side2 + side3 < side1: raise ValueError("Given three sides do not form a triangle") semi_perimeter = (side1 + side2 + side3) / 2 area = sqrt( semi_perimeter * (semi_perimeter - side1) * (semi_perimeter - side2) * (semi_perimeter - side3) ) return area def area_parallelogram(base: float, height: float) -> float: if base < 0 or height < 0: raise ValueError("area_parallelogram() only accepts non-negative values") return base * height def area_trapezium(base1: float, base2: float, height: float) -> float: if base1 < 0 or base2 < 0 or height < 0: raise ValueError("area_trapezium() only accepts non-negative values") return 1 / 2 * (base1 + base2) * height def area_circle(radius: float) -> float: if radius < 0: raise ValueError("area_circle() only accepts non-negative values") return pi * 
radius**2 def area_ellipse(radius_x: float, radius_y: float) -> float: if radius_x < 0 or radius_y < 0: raise ValueError("area_ellipse() only accepts non-negative values") return pi * radius_x * radius_y def area_rhombus(diagonal_1: float, diagonal_2: float) -> float: if diagonal_1 < 0 or diagonal_2 < 0: raise ValueError("area_rhombus() only accepts non-negative values") return 1 / 2 * diagonal_1 * diagonal_2 def area_reg_polygon(sides: int, length: float) -> float: if not isinstance(sides, int) or sides < 3: raise ValueError( "area_reg_polygon() only accepts integers greater than or \ equal to three as number of sides" ) elif length < 0: raise ValueError( "area_reg_polygon() only accepts non-negative values as \ length of a side" ) return (sides * length**2) / (4 * tan(pi / sides)) if __name__ == "__main__": import doctest doctest.testmod(verbose=True) print("[DEMO] Areas of various geometric shapes: \n") print(f"Rectangle: {area_rectangle(10, 20) = }") print(f"Square: {area_square(10) = }") print(f"Triangle: {area_triangle(10, 10) = }") print(f"Triangle: {area_triangle_three_sides(5, 12, 13) = }") print(f"Parallelogram: {area_parallelogram(10, 20) = }") print(f"Rhombus: {area_rhombus(10, 20) = }") print(f"Trapezium: {area_trapezium(10, 20, 30) = }") print(f"Circle: {area_circle(20) = }") print(f"Ellipse: {area_ellipse(10, 20) = }") print("\nSurface Areas of various geometric shapes: \n") print(f"Cube: {surface_area_cube(20) = }") print(f"Cuboid: {surface_area_cuboid(10, 20, 30) = }") print(f"Sphere: {surface_area_sphere(20) = }") print(f"Hemisphere: {surface_area_hemisphere(20) = }") print(f"Cone: {surface_area_cone(10, 20) = }") print(f"Conical Frustum: {surface_area_conical_frustum(10, 20, 30) = }") print(f"Cylinder: {surface_area_cylinder(10, 20) = }") print(f"Torus: {surface_area_torus(20, 10) = }") print(f"Equilateral Triangle: {area_reg_polygon(3, 10) = }") print(f"Square: {area_reg_polygon(4, 10) = }") print(f"Regular Pentagon: {area_reg_polygon(5, 10) = }")
Approximates the area under the curve using the trapezoidal rule: the curve is treated as a collection of linear segments and the areas of the trapezium shapes they form are summed.

:param fnc: a function which defines the curve
:param x_start: left end point, indicating the start of the line segment
:param x_end: right end point, indicating the end of the line segment
:param steps: an accuracy gauge; more steps increase the accuracy
:return: a float representing the area under the curve

Examples: with f(x) = 5, f"{trapezoidal_area(f, 12.0, 14.0, 1000):.3f}" == '10.000'; with f(x) = 9 * x**2, f"{trapezoidal_area(f, -4.0, 0, 10000):.4f}" == '192.0000' and f"{trapezoidal_area(f, -4.0, 4.0, 10000):.4f}" == '384.0000'.

Inside the loop, each small segment of the curve is approximated as linear and solved for its trapezoidal area, then x is incremented by one step.
from __future__ import annotations from collections.abc import Callable def trapezoidal_area( fnc: Callable[[float], float], x_start: float, x_end: float, steps: int = 100, ) -> float: x1 = x_start fx1 = fnc(x_start) area = 0.0 for _ in range(steps): x2 = (x_end - x_start) / steps + x1 fx2 = fnc(x2) area += abs(fx2 + fx1) * (x2 - x1) / 2 x1 = x2 fx1 = fx2 return area if __name__ == "__main__": def f(x): return x**3 + x**2 print("f(x) = x^3 + x^2") print("The area between the curve, x = -5, x = 5 and the x axis is:") i = 10 while i <= 100000: print(f"with {i} steps: {trapezoidal_area(f, -5, 5, i)}") i *= 10
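As a quick sanity check, the trapezoidal rule should converge to the exact integral as the step count grows. A minimal sketch, assuming trapezoidal_area from the block above is in scope (the helper g is only for illustration):

def g(x: float) -> float:
    return x**2  # exact integral over [0, 3] is 3**3 / 3 = 9

print(f"{trapezoidal_area(g, 0.0, 3.0, steps=10_000):.5f}")  # 9.00000

The composite trapezoidal error shrinks quadratically in the step size, so 10,000 steps are far more than enough for five decimal places here.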
Return the average absolute deviation of a list of numbers.
Wiki: https://en.wikipedia.org/wiki/Average_absolute_deviation
Examples: average_absolute_deviation([0]) == 0.0; average_absolute_deviation([4, 1, 3, 2]) == 1.0; average_absolute_deviation([2, 70, 6, 50, 20, 8, 4, 0]) == 20.0; average_absolute_deviation([-20, 0, 30, 15]) == 16.25. An empty list raises ValueError: List is empty.
The implementation first makes sure that the list is not empty, then calculates the average.
def average_absolute_deviation(nums: list[int]) -> float: if not nums: raise ValueError("List is empty") average = sum(nums) / len(nums) return sum(abs(x - average) for x in nums) / len(nums) if __name__ == "__main__": import doctest doctest.testmod()
Find the mean of a list of numbers.
Wiki: https://en.wikipedia.org/wiki/Mean
Examples: mean([3, 6, 9, 12, 15, 18, 21]) == 12.0; mean([5, 10, 15, 20, 25, 30, 35]) == 20.0; mean([1, 2, 3, 4, 5, 6, 7, 8]) == 4.5. An empty list raises ValueError: List is empty.
from __future__ import annotations def mean(nums: list) -> float: if not nums: raise ValueError("List is empty") return sum(nums) / len(nums) if __name__ == "__main__": import doctest doctest.testmod()
Find the median of a list of numbers.
Wiki: https://en.wikipedia.org/wiki/Median
Examples: median([0]) == 0; median([4, 1, 3, 2]) == 2.5; median([2, 70, 6, 50, 20, 8, 4]) == 8.
Args: nums: list of nums. Returns: the median.
Note: the sorted() function returns list[SupportsRichComparisonT@sorted], which does not support the + operator.
from __future__ import annotations def median(nums: list) -> int | float: sorted_list: list[int] = sorted(nums) length = len(sorted_list) mid_index = length >> 1 return ( (sorted_list[mid_index] + sorted_list[mid_index - 1]) / 2 if length % 2 == 0 else sorted_list[mid_index] ) def main(): import doctest doctest.testmod() if __name__ == "__main__": main()
This function returns the mode (mode as in the measures of central tendency) of the input data. The input list may contain any data structure or any data type.
Examples: mode([2, 3, 4, 5, 3, 4, 2, 5, 2, 2, 4, 2, 2, 2]) == [2]; mode([3, 4, 5, 3, 4, 2, 5, 2, 2, 4, 4, 2, 2, 2]) == [2]; mode([3, 4, 5, 3, 4, 2, 5, 2, 2, 4, 4, 4, 2, 2, 4, 2]) == [2, 4]; mode(["x", "y", "y", "z"]) == ["y"]; mode(["x", "x", "y", "y", "z"]) == ["x", "y"].
The implementation gets the maximum count in the input list, then gets the values of the modes.
from typing import Any def mode(input_list: list) -> list[Any]: if not input_list: return [] result = [input_list.count(value) for value in input_list] y = max(result) return sorted({input_list[i] for i, value in enumerate(result) if value == y}) if __name__ == "__main__": import doctest doctest.testmod()
Implement a popular pi-digit-extraction algorithm known as the Bailey-Borwein-Plouffe (BBP) formula, to calculate the nth hex digit of pi.
Wikipedia page: https://en.wikipedia.org/wiki/Bailey%E2%80%93Borwein%E2%80%93Plouffe_formula

:param digit_position: a positive integer representing the position of the digit to extract; the digit immediately after the decimal point is located at position 1
:param precision: number of terms in the second summation to calculate; a higher number reduces the chance of an error but increases the runtime
:return: a hexadecimal digit representing the digit at the nth position in pi's hexadecimal expansion

Examples: "".join(bailey_borwein_plouffe(i) for i in range(1, 11)) == '243f6a8885'; bailey_borwein_plouffe(5, 10000) == '6'. Non-positive or non-integer digit positions (-10, 0, 1.7) raise ValueError: Digit position must be a positive integer. A negative or non-integer precision, e.g. bailey_borwein_plouffe(2, -10) or bailey_borwein_plouffe(2, 1.6), raises ValueError: Precision must be a nonnegative integer.

The function computes an approximation of 16**(n - 1) * pi whose fractional part is mostly accurate, and returns the first hex digit of the fractional part of the result. Since only the first digit of the fractional part matters, no Decimal arithmetic is needed.

_subsum is a private helper function implementing the summation functionality.
:param digit_pos_to_extract: digit position to extract
:param denominator_addend: added to the denominator of fractions in the formula
:param precision: same as precision in the main function
:return: floating-point number whose integer part is not important

If the exponential term is an integer and we mod it by the denominator before dividing, only the integer part of the sum will change; the fractional part will not.
def bailey_borwein_plouffe(digit_position: int, precision: int = 1000) -> str: if (not isinstance(digit_position, int)) or (digit_position <= 0): raise ValueError("Digit position must be a positive integer") elif (not isinstance(precision, int)) or (precision < 0): raise ValueError("Precision must be a nonnegative integer") sum_result = ( 4 * _subsum(digit_position, 1, precision) - 2 * _subsum(digit_position, 4, precision) - _subsum(digit_position, 5, precision) - _subsum(digit_position, 6, precision) ) return hex(int((sum_result % 1) * 16))[2:] def _subsum( digit_pos_to_extract: int, denominator_addend: int, precision: int ) -> float: total = 0.0 for sum_index in range(digit_pos_to_extract + precision): denominator = 8 * sum_index + denominator_addend if sum_index < digit_pos_to_extract: exponential_term = pow( 16, digit_pos_to_extract - 1 - sum_index, denominator ) else: exponential_term = pow(16, digit_pos_to_extract - 1 - sum_index) total += exponential_term / denominator return total if __name__ == "__main__": import doctest doctest.testmod()
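For example, the hexadecimal expansion of pi begins 3.243f6a8885..., so extracting the first ten fractional hex digits one at a time reproduces that prefix. A minimal sketch mirroring the doctest, assuming bailey_borwein_plouffe above is in scope:

digits = "".join(bailey_borwein_plouffe(i) for i in range(1, 11))
print(digits)  # 243f6a8885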
This function returns the negative base 2 representation of the input decimal number.
Args: int: the decimal number to convert. Returns: int: the negative base 2 number.
Examples: decimal_to_negative_base_2(0) == 0; decimal_to_negative_base_2(-19) == 111101; decimal_to_negative_base_2(4) == 100; decimal_to_negative_base_2(7) == 11011.
def decimal_to_negative_base_2(num: int) -> int: if num == 0: return 0 ans = "" while num != 0: num, rem = divmod(num, -2) if rem < 0: rem += 2 num += 1 ans = str(rem) + ans return int(ans) if __name__ == "__main__": import doctest doctest.testmod()
Implementation of basic math in Python: prime factorization, divisor counting, divisor sums, and Euler's phi function.

Find prime factors. Examples: prime_factors(100) == [2, 2, 5, 5]. Non-positive inputs (0, -10) raise ValueError: Only positive integers have prime factors.

Calculate the number of divisors of an integer. Examples: number_of_divisors(100) == 9. Non-positive inputs (0, -10) raise ValueError: Only positive numbers are accepted.

Calculate the sum of divisors. Examples: sum_of_divisors(100) == 217. Non-positive inputs (0, -10) raise ValueError: Only positive numbers are accepted.

Calculate Euler's phi function. Examples: euler_phi(100) == 40. Non-positive inputs (0, -10) raise ValueError: Only positive numbers are accepted.
import math def prime_factors(n: int) -> list: if n <= 0: raise ValueError("Only positive integers have prime factors") pf = [] while n % 2 == 0: pf.append(2) n = int(n / 2) for i in range(3, int(math.sqrt(n)) + 1, 2): while n % i == 0: pf.append(i) n = int(n / i) if n > 2: pf.append(n) return pf def number_of_divisors(n: int) -> int: if n <= 0: raise ValueError("Only positive numbers are accepted") div = 1 temp = 1 while n % 2 == 0: temp += 1 n = int(n / 2) div *= temp for i in range(3, int(math.sqrt(n)) + 1, 2): temp = 1 while n % i == 0: temp += 1 n = int(n / i) div *= temp if n > 1: div *= 2 return div def sum_of_divisors(n: int) -> int: if n <= 0: raise ValueError("Only positive numbers are accepted") s = 1 temp = 1 while n % 2 == 0: temp += 1 n = int(n / 2) if temp > 1: s *= (2**temp - 1) / (2 - 1) for i in range(3, int(math.sqrt(n)) + 1, 2): temp = 1 while n % i == 0: temp += 1 n = int(n / i) if temp > 1: s *= (i**temp - 1) / (i - 1) return int(s) def euler_phi(n: int) -> int: if n <= 0: raise ValueError("Only positive numbers are accepted") s = n for x in set(prime_factors(n)): s *= (x - 1) / x return int(s) if __name__ == "__main__": import doctest doctest.testmod()
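A short worked example tying the four helpers together, assuming the module above is in scope: 100 = 2**2 * 5**2, so it has (2 + 1) * (2 + 1) = 9 divisors, a divisor sum of (1 + 2 + 4) * (1 + 5 + 25) = 217, and phi(100) = 100 * (1 - 1/2) * (1 - 1/5) = 40.

print(prime_factors(100))       # [2, 2, 5, 5]
print(number_of_divisors(100))  # 9
print(sum_of_divisors(100))     # 217
print(euler_phi(100))           # 40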
Binary exponentiation is a method to find a**b in O(log b) time complexity and is one of the most commonly used methods of exponentiation. The method is also useful for modular exponentiation, when the solution to (a**b) % c is required.

To calculate a**b: if b is even, then a**b = (a * a)**(b / 2); if b is odd, then a**b = a * a**(b - 1). Repeat until b = 1 or b = 0.

For modular exponentiation, we use the fact that (a * b) % c = ((a % c) * (b % c)) % c.

binary_exp_recursive computes a**b recursively, where a is the base and b is the exponent. Examples: binary_exp_recursive(3, 5) == 243; binary_exp_recursive(11, 13) == 34522712143931; binary_exp_recursive(-1, 3) == -1; binary_exp_recursive(0, 5) == 0; binary_exp_recursive(3, 1) == 3; binary_exp_recursive(3, 0) == 1; binary_exp_recursive(1.5, 4) == 5.0625. A negative exponent, e.g. binary_exp_recursive(3, -1), raises ValueError: Exponent must be a non-negative integer.

binary_exp_iterative computes a**b iteratively, with the same behaviour and examples as the recursive version.

binary_exp_mod_recursive computes (a**b) % c recursively, where a is the base, b is the exponent and c is the modulus. Examples: binary_exp_mod_recursive(3, 4, 5) == 1; binary_exp_mod_recursive(11, 13, 7) == 4; binary_exp_mod_recursive(1.5, 4, 3) == 2.0625. A negative exponent, e.g. binary_exp_mod_recursive(7, -1, 10), raises ValueError: Exponent must be a non-negative integer; a non-positive modulus, e.g. binary_exp_mod_recursive(7, 13, 0), raises ValueError: Modulus must be a positive integer.

binary_exp_mod_iterative computes (a**b) % c iteratively, with the same behaviour and examples as the recursive version.
def binary_exp_recursive(base: float, exponent: int) -> float: if exponent < 0: raise ValueError("Exponent must be a non-negative integer") if exponent == 0: return 1 if exponent % 2 == 1: return binary_exp_recursive(base, exponent - 1) * base b = binary_exp_recursive(base, exponent // 2) return b * b def binary_exp_iterative(base: float, exponent: int) -> float: if exponent < 0: raise ValueError("Exponent must be a non-negative integer") res: int | float = 1 while exponent > 0: if exponent & 1: res *= base base *= base exponent >>= 1 return res def binary_exp_mod_recursive(base: float, exponent: int, modulus: int) -> float: if exponent < 0: raise ValueError("Exponent must be a non-negative integer") if modulus <= 0: raise ValueError("Modulus must be a positive integer") if exponent == 0: return 1 if exponent % 2 == 1: return (binary_exp_mod_recursive(base, exponent - 1, modulus) * base) % modulus r = binary_exp_mod_recursive(base, exponent // 2, modulus) return (r * r) % modulus def binary_exp_mod_iterative(base: float, exponent: int, modulus: int) -> float: if exponent < 0: raise ValueError("Exponent must be a non-negative integer") if modulus <= 0: raise ValueError("Modulus must be a positive integer") res: int | float = 1 while exponent > 0: if exponent & 1: res = ((res % modulus) * (base % modulus)) % modulus base *= base exponent >>= 1 return res if __name__ == "__main__": from timeit import timeit a = 1269380576 b = 374 c = 34 runs = 100_000 print( timeit( f"binary_exp_recursive({a}, {b})", setup="from __main__ import binary_exp_recursive", number=runs, ) ) print( timeit( f"binary_exp_iterative({a}, {b})", setup="from __main__ import binary_exp_iterative", number=runs, ) ) print( timeit( f"binary_exp_mod_recursive({a}, {b}, {c})", setup="from __main__ import binary_exp_mod_recursive", number=runs, ) ) print( timeit( f"binary_exp_mod_iterative({a}, {b}, {c})", setup="from __main__ import binary_exp_mod_iterative", number=runs, ) )
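A minimal cross-check against Python's built-ins, assuming the functions above are importable. By Fermat's little theorem 3**6 % 7 == 1, so 3**13 % 7 == 3**1 % 7 == 3:

assert binary_exp_iterative(3, 13) == 3**13 == 1594323
assert binary_exp_mod_iterative(3, 13, 7) == pow(3, 13, 7) == 3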
Binary multiplication is a method to find a * b in a time complexity of O(log b), and is one of the most commonly used methods of finding the result of multiplication. It is also useful in cases where the solution to (a * b) % c is required, where a, b, c can be numbers beyond the computer's calculation limits.

Done using iteration; it can also be done using recursion. Let's say you need to calculate a * b.
Rule 1: a * b = (a + a) * (b / 2). Example: 4 * 4 = (4 + 4) * (4 / 2) = 8 * 2.
Rule 2: if b is odd, then a * b = a + a * (b - 1), where b - 1 is even. Once b is even, repeat the process until b = 1 or b = 0, because a * 1 = a and a * 0 = 0.
As far as the modulo is concerned, use the fact that (a + b) % c = ((a % c) + (b % c)) % c, then apply rule 1 or 2, whichever is required.
@chinmoy159

binary_multiply multiplies a and b using bitwise multiplication. Parameters: a (int): the first number; b (int): the second number. Returns: int: a * b. Examples: binary_multiply(2, 3) == 6; binary_multiply(5, 0) == 0; binary_multiply(3, 4) == 12; binary_multiply(10, 5) == 50; binary_multiply(0, 5) == 0; binary_multiply(2, 1) == 2; binary_multiply(1, 10) == 10.

binary_mod_multiply calculates (a * b) % modulus using binary multiplication and modular arithmetic. Parameters: a (int): the first number; b (int): the second number; modulus (int): the modulus. Returns: int: (a * b) % modulus. Examples: binary_mod_multiply(2, 3, 5) == 1; binary_mod_multiply(5, 0, 7) == 0; binary_mod_multiply(3, 4, 6) == 0; binary_mod_multiply(10, 5, 13) == 11; binary_mod_multiply(2, 1, 5) == 2; binary_mod_multiply(1, 10, 3) == 1.
def binary_multiply(a: int, b: int) -> int: res = 0 while b > 0: if b & 1: res += a a += a b >>= 1 return res def binary_mod_multiply(a: int, b: int, modulus: int) -> int: res = 0 while b > 0: if b & 1: res = ((res % modulus) + (a % modulus)) % modulus a += a b >>= 1 return res if __name__ == "__main__": import doctest doctest.testmod()
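A quick sanity check, assuming the two functions above are in scope; 13 * 11 == 143 and 143 % 7 == 3:

assert binary_multiply(13, 11) == 143
assert binary_mod_multiply(13, 11, 7) == 3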
Find the binomial coefficient using Pascal's triangle.
Calculate C(n, r) using Pascal's triangle.
:param n: the total number of items
:param r: the number of items to choose
:return: the binomial coefficient C(n, r)
Examples: binomial_coefficient(10, 5) == 252; binomial_coefficient(10, 0) == 1; binomial_coefficient(0, 10) == 1; binomial_coefficient(10, 10) == 1; binomial_coefficient(5, 2) == 10; binomial_coefficient(5, 6) == 0; binomial_coefficient(3, 5) == 0. Negative arguments, e.g. binomial_coefficient(-2, 3) or binomial_coefficient(5, -1), raise ValueError: n and r must be non-negative integers; float arguments, e.g. binomial_coefficient(10.1, 5) or binomial_coefficient(10, 5.1), raise TypeError: 'float' object cannot be interpreted as an integer.
The implementation starts from nC0 = 1 and computes each row of the triangle from the previous row.
def binomial_coefficient(n: int, r: int) -> int: if n < 0 or r < 0: raise ValueError("n and r must be non-negative integers") if 0 in (n, r): return 1 c = [0 for i in range(r + 1)] c[0] = 1 for i in range(1, n + 1): j = min(i, r) while j > 0: c[j] += c[j - 1] j -= 1 return c[r] if __name__ == "__main__": from doctest import testmod testmod() print(binomial_coefficient(n=10, r=5))
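Since Python 3.8 the standard library exposes math.comb, so the Pascal's-triangle implementation can be validated against it directly. A sketch, assuming binomial_coefficient above is in scope:

from math import comb

assert all(
    binomial_coefficient(n, r) == comb(n, r)
    for n in range(1, 12)
    for r in range(n + 1)
)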
For more information about the binomial distribution: https://en.wikipedia.org/wiki/Binomial_distribution

Return the probability of k successes out of n tries, with probability p for one success. The function uses the factorial function in order to calculate the binomial coefficient C(n, k) = n! / (k! * (n - k)!).
Examples: binomial_distribution(3, 5, 0.7) == 0.30870000000000003; binomial_distribution(2, 4, 0.5) == 0.375.
Successes must be lower than or equal to trials; both must be non-negative integers, and prob must lie strictly between 0 and 1.
from math import factorial def binomial_distribution(successes: int, trials: int, prob: float) -> float: if successes > trials: raise ValueError("successes must be lower or equal to trials") if trials < 0 or successes < 0: raise ValueError("the function is defined for non-negative integers") if not isinstance(successes, int) or not isinstance(trials, int): raise ValueError("the function is defined for non-negative integers") if not 0 < prob < 1: raise ValueError("prob has to be in range of 1 - 0") probability = (prob**successes) * ((1 - prob) ** (trials - successes)) coefficient = float(factorial(trials)) coefficient /= factorial(successes) * factorial(trials - successes) return probability * coefficient if __name__ == "__main__": from doctest import testmod testmod() print("Probability of 2 successes out of 4 trials") print("with probability of 0.75 is:", end=" ") print(binomial_distribution(2, 4, 0.75))
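As a worked example, P(X = 2) for a Binomial(n=4, p=0.5) variable is C(4, 2) * 0.5**2 * 0.5**2 = 6 / 16 = 0.375, matching the doctest above (assuming binomial_distribution is in scope):

print(binomial_distribution(2, 4, 0.5))  # 0.375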
https://en.wikipedia.org/wiki/Floor_and_ceiling_functions
Return the ceiling of x as an Integral.
:param x: the number
:return: the smallest integer >= x
Example: import math; all(ceil(n) == math.ceil(n) for n in (1, -1, 0, -0, 1.1, -1.1, 1.0, -1.0, 1_000_000_000)) is True.
def ceil(x: float) -> int: return int(x) if x - int(x) <= 0 else int(x) + 1 if __name__ == "__main__": import doctest doctest.testmod()
This function calculates the Chebyshev distance (also known as the chessboard distance) between two n-dimensional points represented as lists.
https://en.wikipedia.org/wiki/Chebyshev_distance
Examples: chebyshev_distance([1.0, 1.0], [2.0, 2.0]) == 1.0; chebyshev_distance([1.0, 1.0, 9.0], [2.0, 2.0, -5.2]) == 14.2; chebyshev_distance([1.0], [2.0, 2.0]) raises ValueError: Both points must have the same dimension.
def chebyshev_distance(point_a: list[float], point_b: list[float]) -> float: if len(point_a) != len(point_b): raise ValueError("Both points must have the same dimension.") return max(abs(a - b) for a, b in zip(point_a, point_b))
Takes a list of possible side lengths and determines whether a two-dimensional polygon with such side lengths can exist. Returns a boolean value comparing the largest side length with the sum of the rest.
Wiki: https://en.wikipedia.org/wiki/Triangle_inequality
Examples: check_polygon([6, 10, 5]) == True; check_polygon([3, 7, 13, 2]) == False; check_polygon([1, 4.3, 5.2, 12.2]) == False. The input list is not reordered: after nums = [3, 7, 13, 2]; _ = check_polygon(nums) (run the function, do not show the answer in output), nums is still [3, 7, 13, 2]. An empty list raises ValueError: Monogons and Digons are not polygons in the Euclidean space; non-positive values, e.g. check_polygon([-2, 5, 6]), raise ValueError: All values must be greater than 0.
from __future__ import annotations def check_polygon(nums: list[float]) -> bool: if len(nums) < 2: raise ValueError("Monogons and Digons are not polygons in the Euclidean space") if any(i <= 0 for i in nums): raise ValueError("All values must be greater than 0") copy_nums = nums.copy() copy_nums.sort() return copy_nums[-1] < sum(copy_nums[:-1]) if __name__ == "__main__": import doctest doctest.testmod()
Chinese Remainder Theorem.
GCD (greatest common divisor) or HCF (highest common factor).

If GCD(a, b) = 1, then for any remainder ra modulo a and any remainder rb modulo b, there exists an integer n such that n = ra (mod a) and n = rb (mod b). If n1 and n2 are two such integers, then n1 = n2 (mod ab).

Algorithm:
1. Use the extended Euclid algorithm to find x, y such that a*x + b*y = 1.
2. Take n = ra*b*y + rb*a*x.

extended_euclid: extended Euclid. Examples: extended_euclid(10, 6) == (-1, 2); extended_euclid(7, 5) == (-2, 3).

chinese_remainder_theorem uses extended_euclid to find the inverses. Example: chinese_remainder_theorem(5, 1, 7, 3) == 31. Explanation: 31 is the smallest number such that (i) when we divide it by 5, we get remainder 1, and (ii) when we divide it by 7, we get remainder 3. Also chinese_remainder_theorem(6, 1, 4, 3) == 14.

invert_modulo finds the inverse of a, i.e. a**(-1). Examples: invert_modulo(2, 5) == 3; invert_modulo(8, 7) == 1.

chinese_remainder_theorem2 gives the same solution as above, using invert_modulo instead of extended_euclid. Examples: chinese_remainder_theorem2(5, 1, 7, 3) == 31; chinese_remainder_theorem2(6, 1, 4, 3) == 14.
from __future__ import annotations def extended_euclid(a: int, b: int) -> tuple[int, int]: if b == 0: return (1, 0) (x, y) = extended_euclid(b, a % b) k = a // b return (y, x - k * y) def chinese_remainder_theorem(n1: int, r1: int, n2: int, r2: int) -> int: (x, y) = extended_euclid(n1, n2) m = n1 * n2 n = r2 * x * n1 + r1 * y * n2 return (n % m + m) % m def invert_modulo(a: int, n: int) -> int: (b, x) = extended_euclid(a, n) if b < 0: b = (b % n + n) % n return b def chinese_remainder_theorem2(n1: int, r1: int, n2: int, r2: int) -> int: x, y = invert_modulo(n1, n2), invert_modulo(n2, n1) m = n1 * n2 n = r2 * x * n1 + r1 * y * n2 return (n % m + m) % m if __name__ == "__main__": from doctest import testmod testmod(name="chinese_remainder_theorem", verbose=True) testmod(name="chinese_remainder_theorem2", verbose=True) testmod(name="invert_modulo", verbose=True) testmod(name="extended_euclid", verbose=True)
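The returned value can be verified directly against both congruences; the final (n % m + m) % m normalisation guarantees the least non-negative solution. A sketch, assuming chinese_remainder_theorem above is in scope:

n = chinese_remainder_theorem(5, 1, 7, 3)
assert n % 5 == 1 and n % 7 == 3
print(n)  # 31, the smallest non-negative solution modulo 35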
The Chudnovsky algorithm is a fast method for calculating the digits of pi, based on Ramanujan's pi formulae.
https://en.wikipedia.org/wiki/Chudnovsky_algorithm

pi = constant_term / ((multinomial_term * linear_term) / exponential_term)
where constant_term = 426880 * sqrt(10005).

The linear_term and the exponential_term can be defined iteratively as follows:
L_k+1 = L_k + 545140134, where L_0 = 13591409
X_k+1 = X_k * -262537412640768000, where X_0 = 1

The multinomial_term is defined as follows:
(6k)! / ((3k)! * (k!)**3)
where k is the k-th iteration.

This algorithm correctly calculates around 14 digits of pi per iteration.

Examples: pi(10) == '3.14159265'; pi(100) == '3.14159265358979323846264338327950288419716939937510582097494459230781640628620899862803482534211706'. pi('hello') raises TypeError: Undefined for non-integers; pi(-1) raises ValueError: Undefined for non-natural numbers.
from decimal import Decimal, getcontext from math import ceil, factorial def pi(precision: int) -> str: if not isinstance(precision, int): raise TypeError("Undefined for non-integers") elif precision < 1: raise ValueError("Undefined for non-natural numbers") getcontext().prec = precision num_iterations = ceil(precision / 14) constant_term = 426880 * Decimal(10005).sqrt() exponential_term = 1 linear_term = 13591409 partial_sum = Decimal(linear_term) for k in range(1, num_iterations): multinomial_term = factorial(6 * k) // (factorial(3 * k) * factorial(k) ** 3) linear_term += 545140134 exponential_term *= -262537412640768000 partial_sum += Decimal(multinomial_term * linear_term) / exponential_term return str(constant_term / partial_sum)[:-1] if __name__ == "__main__": n = 50 print(f"The first {n} digits of pi is: {pi(n)}")
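Because each series term contributes roughly 14 digits, pi(10) needs only ceil(10 / 14) = 1 term, i.e. 426880 * sqrt(10005) / 13591409 evaluated at 10-digit precision. A minimal sketch, assuming pi above is in scope:

print(pi(10))  # 3.14159265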
The Collatz conjecture is a famous unsolved problem in mathematics. Given a starting positive integer, define the following sequence: if the current term n is even, then the next term is n / 2; if the current term n is odd, then the next term is 3n + 1. The conjecture claims that this sequence will always reach 1 for any starting number.

Other names for this problem include the 3n + 1 problem, the Ulam conjecture, Kakutani's problem, the Thwaites conjecture, Hasse's algorithm, the Syracuse problem, and the hailstone sequence.

Reference: https://en.wikipedia.org/wiki/Collatz_conjecture

collatz_sequence generates the Collatz sequence starting at n. Examples: tuple(collatz_sequence(4)) == (4, 2, 1); tuple(collatz_sequence(11)) == (11, 34, 17, 52, 26, 13, 40, 20, 10, 5, 16, 8, 4, 2, 1); tuple(collatz_sequence(43)) == (43, 130, 65, 196, 98, 49, 148, 74, 37, 112, 56, 28, 14, 7, 22, 11, 34, 17, 52, 26, 13, 40, 20, 10, 5, 16, 8, 4, 2, 1); starting at 31 yields a long sequence (31, 94, 47, 142, 71, 214, 107, ...) that climbs as high as 9232 before descending to 1. Non-positive starting values (-2, 0) raise Exception: Sequence only defined for positive integers.
from __future__ import annotations from collections.abc import Generator def collatz_sequence(n: int) -> Generator[int, None, None]: if not isinstance(n, int) or n < 1: raise Exception("Sequence only defined for positive integers") yield n while n != 1: if n % 2 == 0: n //= 2 else: n = 3 * n + 1 yield n def main(): n = int(input("Your number: ")) sequence = tuple(collatz_sequence(n)) print(sequence) print(f"Collatz sequence from {n} took {len(sequence)} steps.") if __name__ == "__main__": main()
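A usage sketch with the classic stress-test value 27, whose hailstone trajectory is famously long for such a small start (assuming collatz_sequence above is in scope; the stated length and peak are the well-known values for 27):

seq = tuple(collatz_sequence(27))
print(seq[:5], seq[-3:])   # (27, 82, 41, 124, 62) (4, 2, 1)
print(len(seq), max(seq))  # 112 9232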
https://en.wikipedia.org/wiki/Combination
Returns the number of different combinations of k length which can be made from n values, where n >= k.
Examples: combinations(10, 5) == 252; combinations(6, 3) == 20; combinations(20, 5) == 15504; combinations(52, 5) == 2598960; combinations(0, 0) == 1. Calling it with n < k or negative k, e.g. combinations(4, 5), raises ValueError: Please enter positive integers for n and k where n >= k.
If either of those conditions is true, the function would be asked to calculate a factorial of a negative number, which is not possible.
def combinations(n: int, k: int) -> int: if n < k or k < 0: raise ValueError("Please enter positive integers for n and k where n >= k") res = 1 for i in range(k): res *= n - i res //= i + 1 return res if __name__ == "__main__": print( "The number of five-card hands possible from a standard", f"fifty-two card deck is: {combinations(52, 5)}\n", ) print( "If a class of 40 students must be arranged into groups of", f"4 for group projects, there are {combinations(40, 4)} ways", "to arrange them.\n", ) print( "If 10 teams are competing in a Formula One race, there", f"are {combinations(10, 3)} ways that first, second and", "third place can be awarded.", )
Finding the continued fraction for a rational number using Python.
https://en.wikipedia.org/wiki/Continued_fraction

:param num: Fraction of the number whose continued fractions are to be found; use Fraction(str(number)) for more accurate results due to float inaccuracies
:return: the continued fraction of the rational number; it is all the commas in the (n + 1)-tuple notation

Examples: continued_fraction(Fraction(2)) == [2]; continued_fraction(Fraction("3.245")) == [3, 4, 12, 4]; continued_fraction(Fraction("2.25")) == [2, 4]; continued_fraction(1 / Fraction("2.25")) == [0, 2, 4]; continued_fraction(Fraction(415, 93)) == [4, 2, 6, 7]; continued_fraction(Fraction(0)) == [0]; continued_fraction(Fraction(0.75)) == [0, 1, 3]; continued_fraction(Fraction("-2.25")) == [-3, 1, 3], since -2.25 = -3 + 0.75 and 0.75 = 1/(1 + 1/3).
from fractions import Fraction from math import floor def continued_fraction(num: Fraction) -> list[int]: numerator, denominator = num.as_integer_ratio() continued_fraction_list: list[int] = [] while True: integer_part = floor(numerator / denominator) continued_fraction_list.append(integer_part) numerator -= integer_part * denominator if numerator == 0: break numerator, denominator = denominator, numerator return continued_fraction_list if __name__ == "__main__": import doctest doctest.testmod() print("Continued Fraction of 0.84375 is: ", continued_fraction(Fraction("0.84375")))
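A worked example, assuming continued_fraction above is in scope: 415/93 = 4 + 1/(2 + 1/(6 + 1/7)), so the expansion is [4, 2, 6, 7]:

from fractions import Fraction

print(continued_fraction(Fraction(415, 93)))  # [4, 2, 6, 7]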
Isolate the decimal part of a number.
https://stackoverflow.com/questions/3886402/how-to-get-numbers-after-decimal-point

Isolates the decimal part of a number. If digit_amount > 0, round to that decimal place; else, return the entire decimal part.
Examples: decimal_isolate(1.53, 0) == 0.53; decimal_isolate(35.345, 1) == 0.3; decimal_isolate(35.345, 2) == 0.34; decimal_isolate(35.345, 3) == 0.345; decimal_isolate(-14.789, 3) == -0.789; decimal_isolate(0, 2) == 0; decimal_isolate(-14.123, 1) == -0.1; decimal_isolate(-14.123, 2) == -0.12; decimal_isolate(-14.123, 3) == -0.123.
def decimal_isolate(number: float, digit_amount: int) -> float: if digit_amount > 0: return round(number - int(number), digit_amount) return number - int(number) if __name__ == "__main__": print(decimal_isolate(1.53, 0)) print(decimal_isolate(35.345, 1)) print(decimal_isolate(35.345, 2)) print(decimal_isolate(35.345, 3)) print(decimal_isolate(-14.789, 3)) print(decimal_isolate(0, 2)) print(decimal_isolate(-14.123, 1)) print(decimal_isolate(-14.123, 2)) print(decimal_isolate(-14.123, 3))
Return a decimal number in its simplest fraction form.
Examples: decimal_to_fraction(2) == (2, 1); decimal_to_fraction(89.0) == (89, 1); decimal_to_fraction("67") == (67, 1); decimal_to_fraction("45.0") == (45, 1); decimal_to_fraction(1.5) == (3, 2); decimal_to_fraction("6.25") == (25, 4); decimal_to_fraction("78td") raises ValueError: Please enter a valid number.
def decimal_to_fraction(decimal: float | str) -> tuple[int, int]: try: decimal = float(decimal) except ValueError: raise ValueError("Please enter a valid number") fractional_part = decimal - int(decimal) if fractional_part == 0: return int(decimal), 1 else: number_of_frac_digits = len(str(decimal).split(".")[1]) numerator = int(decimal * (10**number_of_frac_digits)) denominator = 10**number_of_frac_digits divisor, dividend = denominator, numerator while True: remainder = dividend % divisor if remainder == 0: break dividend, divisor = divisor, remainder numerator, denominator = numerator / divisor, denominator / divisor return int(numerator), int(denominator) if __name__ == "__main__": print(f"{decimal_to_fraction(2) = }") print(f"{decimal_to_fraction(89.0) = }") print(f"{decimal_to_fraction('67') = }") print(f"{decimal_to_fraction('45.0') = }") print(f"{decimal_to_fraction(1.5) = }") print(f"{decimal_to_fraction('6.25') = }") print(f"{decimal_to_fraction('78td') = }")
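A round-trip sketch, assuming decimal_to_fraction above is in scope: the reduced pair should reproduce the original decimal exactly for dyadic values like 6.25:

numerator, denominator = decimal_to_fraction(6.25)
assert (numerator, denominator) == (25, 4)
assert numerator / denominator == 6.25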
dodecahedron.py

A regular dodecahedron is a three-dimensional figure made up of 12 pentagonal faces, all of the same size.

dodecahedron_surface_area calculates the surface area of a regular dodecahedron:
a = 3 * ((25 + 10 * 5**(1/2))**(1/2)) * e**2
where a is the area of the dodecahedron and e is the length of the edge.
Reference: Dodecahedron, Study.com, https://study.com/academy/lesson/dodecahedron-volume-surface-area-formulas.html
:param edge: length of the edge of the dodecahedron
:type edge: float
:return: the surface area of the dodecahedron as a float
Tests: dodecahedron_surface_area(5) == 516.1432201766901; dodecahedron_surface_area(10) == 2064.5728807067603; dodecahedron_surface_area(-1) raises ValueError: Length must be a positive.

dodecahedron_volume calculates the volume of a regular dodecahedron:
v = ((15 + 7 * 5**(1/2)) / 4) * e**3
where v is the volume of the dodecahedron and e is the length of the edge.
Reference, parameter and return type as above.
Tests: dodecahedron_volume(5) == 957.8898700780791; dodecahedron_volume(10) == 7663.118960624633; dodecahedron_volume(-1) raises ValueError: Length must be a positive.
def dodecahedron_surface_area(edge: float) -> float: if edge <= 0 or not isinstance(edge, int): raise ValueError("Length must be a positive.") return 3 * ((25 + 10 * (5 ** (1 / 2))) ** (1 / 2)) * (edge**2) def dodecahedron_volume(edge: float) -> float: if edge <= 0 or not isinstance(edge, int): raise ValueError("Length must be a positive.") return ((15 + (7 * (5 ** (1 / 2)))) / 4) * (edge**3) if __name__ == "__main__": import doctest doctest.testmod()
double_factorial_recursive computes the double factorial using a recursive method; recursion can be costly for large numbers. To learn about the theory behind this algorithm: https://en.wikipedia.org/wiki/Double_factorial
Examples: from math import prod; all(double_factorial_recursive(i) == prod(range(i, 0, -2)) for i in range(20)) is True. double_factorial_recursive(0.1) raises ValueError: double_factorial_recursive() only accepts integral values; double_factorial_recursive(-1) raises ValueError: double_factorial_recursive() not defined for negative values.

double_factorial_iterative computes the double factorial using an iterative method, with the same examples and error behaviour (the messages reference double_factorial_iterative()).
def double_factorial_recursive(n: int) -> int: if not isinstance(n, int): raise ValueError("double_factorial_recursive() only accepts integral values") if n < 0: raise ValueError("double_factorial_recursive() not defined for negative values") return 1 if n <= 1 else n * double_factorial_recursive(n - 2) def double_factorial_iterative(num: int) -> int: if not isinstance(num, int): raise ValueError("double_factorial_iterative() only accepts integral values") if num < 0: raise ValueError("double_factorial_iterative() not defined for negative values") value = 1 for i in range(num, 0, -2): value *= i return value if __name__ == "__main__": import doctest doctest.testmod()
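A worked example, assuming the two functions above are in scope: 7!! = 7 * 5 * 3 * 1 = 105, and both variants agree with a direct product:

from math import prod

assert (
    double_factorial_recursive(7)
    == double_factorial_iterative(7)
    == prod(range(7, 0, -2))
    == 105
)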
https://en.wikipedia.org/wiki/Automatic_differentiation#Automatic_differentiation_using_dual_numbers
https://blog.jliszka.org/2013/10/24/exact-numeric-nth-derivatives.html

Note: this only works for basic functions f(x) where the power of x is positive.

Examples: differentiate(lambda x: x**2, 2, 2) == 2; differentiate(lambda x: x**2 * x**4, 9, 2) == 196830; differentiate(lambda y: 0.5 * (y + 3)**6, 3.5, 4) == 7605.0; differentiate(lambda y: y**2, 4, 3) == 0. Passing a non-function, e.g. differentiate(8, 8, 8), raises ValueError: differentiate() requires a function as input for func; a non-numeric position, e.g. differentiate(lambda x: x**2, "", 1), raises ValueError: differentiate() requires a float as input for position; a non-integer order raises ValueError: differentiate() requires an int as input for order.
from math import factorial


class Dual:
    def __init__(self, real, rank):
        self.real = real
        if isinstance(rank, int):
            self.duals = [1] * rank
        else:
            self.duals = rank

    def __repr__(self):
        return (
            f"{self.real}+"
            f"{'+'.join(f'{dual}E{n + 1}' for n, dual in enumerate(self.duals))}"
        )

    def reduce(self):
        cur = self.duals.copy()
        while cur[-1] == 0:
            cur.pop(-1)
        return Dual(self.real, cur)

    def __add__(self, other):
        if not isinstance(other, Dual):
            return Dual(self.real + other, self.duals)
        s_dual = self.duals.copy()
        o_dual = other.duals.copy()
        # pad the shorter dual part with zeros so both have the same length
        # (padding with ones would inject spurious higher-order terms)
        if len(s_dual) > len(o_dual):
            o_dual.extend([0] * (len(s_dual) - len(o_dual)))
        elif len(s_dual) < len(o_dual):
            s_dual.extend([0] * (len(o_dual) - len(s_dual)))
        new_duals = []
        for i in range(len(s_dual)):
            new_duals.append(s_dual[i] + o_dual[i])
        return Dual(self.real + other.real, new_duals)

    __radd__ = __add__

    def __sub__(self, other):
        return self + other * -1

    def __mul__(self, other):
        if not isinstance(other, Dual):
            new_duals = []
            for i in self.duals:
                new_duals.append(i * other)
            return Dual(self.real * other, new_duals)
        new_duals = [0] * (len(self.duals) + len(other.duals) + 1)
        # cross terms of the two dual parts
        for i, item in enumerate(self.duals):
            for j, jtem in enumerate(other.duals):
                new_duals[i + j + 1] += item * jtem
        # dual parts scaled by the other operand's real part
        for k in range(len(self.duals)):
            new_duals[k] += self.duals[k] * other.real
        for index in range(len(other.duals)):
            new_duals[index] += other.duals[index] * self.real
        return Dual(self.real * other.real, new_duals)

    __rmul__ = __mul__

    def __truediv__(self, other):
        if not isinstance(other, Dual):
            new_duals = []
            for i in self.duals:
                new_duals.append(i / other)
            return Dual(self.real / other, new_duals)
        raise ValueError

    def __floordiv__(self, other):
        if not isinstance(other, Dual):
            new_duals = []
            for i in self.duals:
                new_duals.append(i // other)
            return Dual(self.real // other, new_duals)
        raise ValueError

    def __pow__(self, n):
        if n < 0 or isinstance(n, float):
            raise ValueError("power must be a positive integer")
        if n == 0:
            return 1
        if n == 1:
            return self
        x = self
        for _ in range(n - 1):
            x *= self
        return x


def differentiate(func, position, order):
    if not callable(func):
        raise ValueError("differentiate() requires a function as input for func")
    if not isinstance(position, (float, int)):
        raise ValueError("differentiate() requires a float as input for position")
    if not isinstance(order, int):
        raise ValueError("differentiate() requires an int as input for order")
    d = Dual(position, 1)
    result = func(d)
    if order == 0:
        return result.real
    # the k-th dual coefficient holds f^(k)(position) / k!
    return result.duals[order - 1] * factorial(order)


if __name__ == "__main__":
    import doctest

    doctest.testmod()

    def f(y):
        return y**2 * y**4

    print(differentiate(f, 9, 2))
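A minimal usage sketch, assuming the Dual class above: the k-th dual coefficient times k! recovers the k-th derivative, so for f(x) = x**3 at x = 2 the first derivative is 3 * 2**2 = 12 and the third is 6:

# f(x) = x**3, f'(x) = 3x**2, f'''(x) = 6
print(differentiate(lambda x: x**3, 2, 1))  # 12
print(differentiate(lambda x: x**3, 2, 3))  # 6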
Implementation of entropy of information: https://en.wikipedia.org/wiki/Entropy_(information_theory)

calculate_prob takes a text string, builds two frequency dicts via analyze_text, and then calculates their entropy.
:param text: the input string
:return: prints
1) entropy of information based on 1 alphabet
2) entropy of information based on couples of 2 alphabet
3) the entropy of H(X n | X n-1)

Text from random books. Also, random quotes.
>>> text = ("Behind Winston's back the voice from the "
...         "telescreen was still babbling and the overfulfilment")
>>> calculate_prob(text)
4.0
6.0
2.0

>>> text = ("The Ministry of Truth - Minitrue, in Newspeak [Newspeak was the "
...         "official face in elegant lettering, the three")
>>> calculate_prob(text)
4.0
5.0
1.0

>>> text = ("Had repulsive dashwoods suspicion sincerity but advantage now him. "
...         "Remark easily garret nor nay. Civil those mrs enjoy shy fat merry. "
...         "You greatest jointure saw horrible. He private he on be imagine "
...         "suppose. Fertile beloved evident through no service elderly is. "
...         "Blind there if every no so at. Own neglected you preferred way "
...         "sincerity delivered his attempted. To of message cottage windows "
...         "do besides against uncivil. Delightful unreserved impossible few "
...         "estimating men favourable see entreaties. She propriety immediate "
...         "was improving. He or entrance humoured likewise moderate. Much nor "
...         "game son say feel. Fat make met can must form into gate. Me we "
...         "offending prevailed discovery.")
>>> calculate_prob(text)
4.0
7.0
3.0

analyze_text converts the text input into two dicts of counts: the first dictionary stores the frequency of single character strings, the second stores the frequency of two character strings.
#!/usr/bin/env python3
from __future__ import annotations

import math
from collections import Counter
from string import ascii_lowercase


def calculate_prob(text: str) -> None:
    single_char_strings, two_char_strings = analyze_text(text)
    my_alphas = list(" " + ascii_lowercase)

    # what is our total sum of probabilities?
    all_sum = sum(single_char_strings.values())

    # one length string
    my_fir_sum = 0
    # for each alpha we go in our dict and if it is in it we calculate entropy
    for ch in my_alphas:
        if ch in single_char_strings:
            my_str = single_char_strings[ch]
            prob = my_str / all_sum  # entropy formula
            my_fir_sum += prob * math.log2(prob)

    # print entropy
    print(f"{round(-1 * my_fir_sum):.1f}")

    # two len string
    all_sum = sum(two_char_strings.values())
    my_sec_sum = 0
    # for each alpha (two in size) calculate entropy
    for ch0 in my_alphas:
        for ch1 in my_alphas:
            sequence = ch0 + ch1
            if sequence in two_char_strings:
                my_str = two_char_strings[sequence]
                prob = int(my_str) / all_sum
                my_sec_sum += prob * math.log2(prob)

    # print second entropy
    print(f"{round(-1 * my_sec_sum):.1f}")

    # print the difference between them
    print(f"{round((-1 * my_sec_sum) - (-1 * my_fir_sum)):.1f}")


def analyze_text(text: str) -> tuple[dict, dict]:
    single_char_strings = Counter()  # type: ignore
    two_char_strings = Counter()  # type: ignore
    single_char_strings[text[-1]] += 1

    # first case when we have space at start
    two_char_strings[" " + text[0]] += 1
    for i in range(len(text) - 1):
        single_char_strings[text[i]] += 1
        two_char_strings[text[i : i + 2]] += 1
    return single_char_strings, two_char_strings


def main():
    import doctest

    doctest.testmod()


if __name__ == "__main__":
    main()
Calculate the distance between the two endpoints of two vectors. A vector is defined as a list, tuple, or numpy 1D array.

>>> euclidean_distance((0, 0), (2, 2))
2.8284271247461903
>>> euclidean_distance(np.array([0, 0, 0]), np.array([2, 2, 2]))
3.4641016151377544
>>> euclidean_distance(np.array([1, 2, 3, 4]), np.array([5, 6, 7, 8]))
8.0
>>> euclidean_distance([1, 2, 3, 4], [5, 6, 7, 8])
8.0

euclidean_distance_no_np calculates the same distance without numpy:

>>> euclidean_distance_no_np((0, 0), (2, 2))
2.8284271247461903
>>> euclidean_distance_no_np([1, 2, 3, 4], [5, 6, 7, 8])
8.0

Benchmarks comparing the two run under __main__.
from __future__ import annotations import typing from collections.abc import Iterable import numpy as np Vector = typing.Union[Iterable[float], Iterable[int], np.ndarray] VectorOut = typing.Union[np.float64, int, float] def euclidean_distance(vector_1: Vector, vector_2: Vector) -> VectorOut: return np.sqrt(np.sum((np.asarray(vector_1) - np.asarray(vector_2)) ** 2)) def euclidean_distance_no_np(vector_1: Vector, vector_2: Vector) -> VectorOut: return sum((v1 - v2) ** 2 for v1, v2 in zip(vector_1, vector_2)) ** (1 / 2) if __name__ == "__main__": def benchmark() -> None: from timeit import timeit print("Without Numpy") print( timeit( "euclidean_distance_no_np([1, 2, 3], [4, 5, 6])", number=10000, globals=globals(), ) ) print("With Numpy") print( timeit( "euclidean_distance([1, 2, 3], [4, 5, 6])", number=10000, globals=globals(), ) ) benchmark()
Calculate the numeric solution at each step to an ODE using Euler's method. For reference to Euler's method refer to: https://en.wikipedia.org/wiki/Euler_method

Args:
    ode_func (Callable): the ordinary differential equation as a function of x and y
    y0 (float): the initial value for y
    x0 (float): the initial value for x
    step_size (float): the increment value for x
    x_end (float): the final value of x to be calculated

Returns:
    np.ndarray: solution of y for every step in x

The exact solution is math.exp(x):
>>> def f(x, y):
...     return y
>>> y0 = 1
>>> y = explicit_euler(f, y0, 0.0, 0.01, 5)
>>> y[-1]
144.77277243257308
from collections.abc import Callable import numpy as np def explicit_euler( ode_func: Callable, y0: float, x0: float, step_size: float, x_end: float ) -> np.ndarray: n = int(np.ceil((x_end - x0) / step_size)) y = np.zeros((n + 1,)) y[0] = y0 x = x0 for k in range(n): y[k + 1] = y[k] + step_size * ode_func(x, y[k]) x += step_size return y if __name__ == "__main__": import doctest doctest.testmod()
Calculate the solution at each step to an ODE using Euler's modified method. The Euler method is straightforward to implement but can't give accurate solutions, so some changes were proposed to improve accuracy: https://en.wikipedia.org/wiki/Euler_method

Arguments:
    ode_func: the ODE as a function of x and y
    y0: the initial value for y
    x0: the initial value for x
    step_size: the increment value for x
    x_end: the end value for x

>>> def f1(x, y):
...     return -2 * x * (y**2)
>>> y = euler_modified(f1, 1.0, 0.0, 0.2, 1.0)
>>> y[-1]
0.503338255442106

(The exact solution of this first ODE is y = 1 / (1 + x**2), so y(1) = 0.5.)

>>> import math
>>> def f2(x, y):
...     return -2 * y + (x**3) * math.exp(-2 * x)
>>> y = euler_modified(f2, 1.0, 0.0, 0.1, 0.3)
>>> y[-1]
0.5525976431951775
from collections.abc import Callable import numpy as np def euler_modified( ode_func: Callable, y0: float, x0: float, step_size: float, x_end: float ) -> np.ndarray: n = int(np.ceil((x_end - x0) / step_size)) y = np.zeros((n + 1,)) y[0] = y0 x = x0 for k in range(n): y_get = y[k] + step_size * ode_func(x, y[k]) y[k + 1] = y[k] + ( (step_size / 2) * (ode_func(x, y[k]) + ode_func(x + step_size, y_get)) ) x += step_size return y if __name__ == "__main__": import doctest doctest.testmod()
Euler's totient function finds the number of relative primes of a number n from 1 to n.

>>> n = 10
>>> totient_calculation = totient(n)
>>> for i in range(1, n):
...     print(f"{i} has {totient_calculation[i]} relative primes.")
1 has 0 relative primes.
2 has 1 relative primes.
3 has 2 relative primes.
4 has 2 relative primes.
5 has 4 relative primes.
6 has 2 relative primes.
7 has 6 relative primes.
8 has 4 relative primes.
9 has 6 relative primes.
def totient(n: int) -> list:
    is_prime = [True for i in range(n + 1)]
    totients = [i - 1 for i in range(n + 1)]
    primes = []
    for i in range(2, n + 1):
        if is_prime[i]:
            primes.append(i)
        for j in range(len(primes)):
            if i * primes[j] > n:  # stay within the bounds of the table
                break
            is_prime[i * primes[j]] = False

            if i % primes[j] == 0:
                # primes[j] divides i, so phi(i * p) = phi(i) * p
                totients[i * primes[j]] = totients[i] * primes[j]
                break

            # otherwise phi(i * p) = phi(i) * (p - 1)
            totients[i * primes[j]] = totients[i] * (primes[j] - 1)

    return totients


if __name__ == "__main__":
    import doctest

    doctest.testmod()
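A small cross-check sketch against the naive definition phi(n) = |{k in [1, n] : gcd(k, n) == 1}|; the naive_totient helper here is illustrative, not part of the module (indices 0 and 1 are degenerate in the sieve, so the check starts at 2):

from math import gcd

def naive_totient(n: int) -> int:
    # Count the integers in [1, n] that are coprime to n
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

totients = totient(100)
assert all(totients[i] == naive_totient(i) for i in range(2, 101))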
Extended Euclidean Algorithm: finds 2 numbers a and b such that they satisfy the equation am + bn = gcd(m, n), a.k.a. Bezout's identity.
https://en.wikipedia.org/wiki/Extended_Euclidean_algorithm

@author: S. Sharma <silentcat@protonmail.com>
@date: 2019-02-25T12:08:53-06:00
@last modified by: pikulet
@last modified time: 2020-10-02

>>> extended_euclidean_algorithm(1, 24)
(1, 0)
>>> extended_euclidean_algorithm(8, 14)
(2, -1)
>>> extended_euclidean_algorithm(240, 46)
(-9, 47)
>>> extended_euclidean_algorithm(1, -4)
(1, 0)
>>> extended_euclidean_algorithm(-2, -4)
(-1, 0)
>>> extended_euclidean_algorithm(0, -4)
(0, -1)
>>> extended_euclidean_algorithm(2, 0)
(1, 0)
from __future__ import annotations

import sys


def extended_euclidean_algorithm(a: int, b: int) -> tuple[int, int]:
    # base cases
    if abs(a) == 1:
        return a, 0
    elif abs(b) == 1:
        return 0, b

    old_remainder, remainder = a, b
    old_coeff_a, coeff_a = 1, 0
    old_coeff_b, coeff_b = 0, 1

    while remainder != 0:
        quotient = old_remainder // remainder
        old_remainder, remainder = remainder, old_remainder - quotient * remainder
        old_coeff_a, coeff_a = coeff_a, old_coeff_a - quotient * coeff_a
        old_coeff_b, coeff_b = coeff_b, old_coeff_b - quotient * coeff_b

    # sign correction for negative numbers
    if a < 0:
        old_coeff_a = -old_coeff_a
    if b < 0:
        old_coeff_b = -old_coeff_b

    return old_coeff_a, old_coeff_b


def main():
    if len(sys.argv) < 3:
        print("2 integer arguments required")
        return 1
    a = int(sys.argv[1])
    b = int(sys.argv[2])
    # call extended euclidean algorithm
    print(extended_euclidean_algorithm(a, b))
    return 0


if __name__ == "__main__":
    raise SystemExit(main())
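A quick Bezout sanity sketch: the returned coefficients satisfy a*x + b*y == gcd(a, b) for the documented pairs (math.gcd works on absolute values, matching the identity up to sign):

from math import gcd

# Bezout's identity: a*x + b*y == gcd(a, b)
for a, b in ((240, 46), (8, 14), (1, -4), (-2, -4)):
    x, y = extended_euclidean_algorithm(a, b)
    assert a * x + b * y == gcd(a, b)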
Factorial of a positive integer: https://en.wikipedia.org/wiki/Factorial

factorial calculates the factorial of a specified number n iteratively:

>>> import math
>>> all(factorial(i) == math.factorial(i) for i in range(20))
True
>>> factorial(0.1)
Traceback (most recent call last):
    ...
ValueError: factorial() only accepts integral values
>>> factorial(-1)
Traceback (most recent call last):
    ...
ValueError: factorial() not defined for negative values
>>> factorial(1)
1
>>> factorial(6)
720
>>> factorial(0)
1

factorial_recursive computes the same result recursively and satisfies the same doctests.
def factorial(number: int) -> int:
    if number != int(number):
        raise ValueError("factorial() only accepts integral values")
    if number < 0:
        raise ValueError("factorial() not defined for negative values")
    value = 1
    for i in range(1, number + 1):
        value *= i
    return value


def factorial_recursive(n: int) -> int:
    if not isinstance(n, int):
        raise ValueError("factorial() only accepts integral values")
    if n < 0:
        raise ValueError("factorial() not defined for negative values")
    return 1 if n in {0, 1} else n * factorial_recursive(n - 1)


if __name__ == "__main__":
    import doctest

    doctest.testmod()
    n = int(input("Enter a positive integer: ").strip() or 0)
    print(f"factorial({n}) is {factorial(n)}")
>>> factors_of_a_number(1)
[1]
>>> factors_of_a_number(5)
[1, 5]
>>> factors_of_a_number(24)
[1, 2, 3, 4, 6, 8, 12, 24]
>>> factors_of_a_number(-24)
[]
from doctest import testmod
from math import sqrt


def factors_of_a_number(num: int) -> list:
    facs: list[int] = []
    if num < 1:
        return facs
    facs.append(1)
    if num == 1:
        return facs
    facs.append(num)
    for i in range(2, int(sqrt(num)) + 1):
        if num % i == 0:  # if i is a factor of num
            facs.append(i)
            d = num // i  # num // i is the other factor of num
            if d != i:  # if d and i are distinct, we have found another factor
                facs.append(d)
    facs.sort()
    return facs


if __name__ == "__main__":
    testmod(name="factors_of_a_number", verbose=True)
Fast inverse square root (1/sqrt(x)) using the Quake III algorithm.
Reference: https://en.wikipedia.org/wiki/Fast_inverse_square_root
Accuracy: https://en.wikipedia.org/wiki/Fast_inverse_square_root#Accuracy

Compute the fast inverse square root of a floating-point number using the famous Quake III algorithm.

:param float number: input number for which to calculate the inverse square root
:return float: the fast inverse square root of the input number

Example:
>>> fast_inverse_sqrt(10)
0.3156857923527257
>>> fast_inverse_sqrt(4)
0.49915357479239103
>>> fast_inverse_sqrt(4.1)
0.4932849504615651
>>> fast_inverse_sqrt(0)
Traceback (most recent call last):
    ...
ValueError: Input must be a positive number.
>>> fast_inverse_sqrt(-1)
Traceback (most recent call last):
    ...
ValueError: Input must be a positive number.
>>> from math import isclose, sqrt
>>> all(isclose(fast_inverse_sqrt(i), 1 / sqrt(i), rel_tol=0.00132)
...     for i in range(50, 60))
True
import struct


def fast_inverse_sqrt(number: float) -> float:
    if number <= 0:
        raise ValueError("Input must be a positive number.")
    # Reinterpret the float's bits as a 32-bit integer
    i = struct.unpack(">i", struct.pack(">f", number))[0]
    # The magic constant gives a good first approximation of 1/sqrt(x)
    i = 0x5F3759DF - (i >> 1)
    # Reinterpret the bits back as a float
    y = struct.unpack(">f", struct.pack(">i", i))[0]
    # One iteration of Newton's method refines the estimate
    return y * (1.5 - 0.5 * number * y * y)


if __name__ == "__main__":
    from doctest import testmod

    testmod()
    from math import sqrt

    for i in range(5, 101, 5):
        print(f"{i:>3}: {(1 / sqrt(i)) - fast_inverse_sqrt(i):.5f}")
Python program to show the usage of Fermat's little theorem in a division.

According to Fermat's little theorem, (a / b) mod p always equals (a * (b ^ (p - 2))) mod p. Here we assume that p is a prime number, b divides a, and p doesn't divide b.

Wikipedia reference: https://en.wikipedia.org/wiki/Fermat%27s_little_theorem
def binary_exponentiation(a: int, n: float, mod: int) -> int:
    if n == 0:
        return 1
    elif n % 2 == 1:
        return (binary_exponentiation(a, n - 1, mod) * a) % mod
    else:
        b = binary_exponentiation(a, n / 2, mod)
        return (b * b) % mod


p = 701  # a prime number
a = 1000000000
b = 10

# using the binary exponentiation function, O(log(p)):
print((a / b) % p == (a * binary_exponentiation(b, p - 2, p)) % p)

# using Python operators:
print((a / b) % p == (a * b ** (p - 2)) % p)
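As an aside, Python's built-in three-argument pow performs the same modular exponentiation in O(log p) without recursion, and since Python 3.8 a negative exponent yields the modular inverse directly:

# Same modular exponentiation via the built-in pow
assert binary_exponentiation(b, p - 2, p) == pow(b, p - 2, p)

# Python >= 3.8: pow(b, -1, p) is the modular inverse of b mod p
assert pow(b, -1, p) == pow(b, p - 2, p)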
Calculates the Fibonacci sequence using iteration, recursion, memoization, and a simplified form of Binet's formula.

NOTE 1: the iterative, recursive, and memoization functions are more accurate than the Binet's formula function because the Binet function uses floats.

NOTE 2: the Binet's formula function is much more limited in the size of inputs that it can handle due to the size limitations of Python floats.

See https://en.wikipedia.org/wiki/Fibonacci_number for more information; benchmark numbers are in __main__ for performance comparisons.

time_func times the execution of a function with parameters.

fib_iterative_yield calculates the first n (1-indexed) Fibonacci numbers using iteration with yield:
>>> list(fib_iterative_yield(0))
[0]
>>> tuple(fib_iterative_yield(1))
(0, 1)
>>> tuple(fib_iterative_yield(5))
(0, 1, 1, 2, 3, 5)
>>> tuple(fib_iterative_yield(10))
(0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55)
>>> tuple(fib_iterative_yield(-1))
Traceback (most recent call last):
    ...
ValueError: n is negative

fib_iterative calculates the first n (0-indexed) Fibonacci numbers using iteration:
>>> fib_iterative(0)
[0]
>>> fib_iterative(1)
[0, 1]
>>> fib_iterative(5)
[0, 1, 1, 2, 3, 5]
>>> fib_iterative(10)
[0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
>>> fib_iterative(-1)
Traceback (most recent call last):
    ...
ValueError: n is negative

fib_recursive calculates the first n (0-indexed) Fibonacci numbers using recursion; its helper fib_recursive_term calculates the i-th (0-indexed) Fibonacci number:
>>> fib_recursive_term(0)
0
>>> fib_recursive_term(1)
1
>>> fib_recursive_term(5)
5
>>> fib_recursive_term(10)
55
>>> fib_recursive_term(-1)
Traceback (most recent call last):
    ...
ValueError: n is negative

fib_recursive_cached is the same recursive calculation with the helper memoized via functools.cache, and it satisfies the same doctests.

fib_memoization calculates the first n (0-indexed) Fibonacci numbers using memoization; the cache (pre-filled with the base cases) must live outside the recursive function, otherwise it would reset every time the function calls itself:
>>> fib_memoization(0)
[0]
>>> fib_memoization(1)
[0, 1]
>>> fib_memoization(5)
[0, 1, 1, 2, 3, 5]
>>> fib_memoization(10)
[0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
>>> fib_memoization(-1)
Traceback (most recent call last):
    ...
ValueError: n is negative

fib_binet calculates the first n (0-indexed) Fibonacci numbers using a simplified form of Binet's formula:
https://en.m.wikipedia.org/wiki/Fibonacci_number#Computation_by_rounding

NOTE 1: this function diverges from fib_iterative at around n = 71, likely due to compounding floating-point arithmetic errors.
NOTE 2: this function doesn't accept n >= 1475 because it overflows thereafter due to the size limitations of Python floats.

>>> fib_binet(0)
[0]
>>> fib_binet(1)
[0, 1]
>>> fib_binet(5)
[0, 1, 1, 2, 3, 5]
>>> fib_binet(10)
[0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
>>> fib_binet(-1)
Traceback (most recent call last):
    ...
ValueError: n is negative
>>> fib_binet(1475)
Traceback (most recent call last):
    ...
ValueError: n is too large

Times on an M1 MacBook Pro, fastest to slowest:
fib_iterative_yield:  0.0012 ms
fib_iterative:        0.0031 ms
fib_binet:            0.0062 ms
fib_memoization:      0.0100 ms
fib_recursive_cached: 0.0153 ms
fib_recursive:        257.0910 ms
import functools from collections.abc import Iterator from math import sqrt from time import time def time_func(func, *args, **kwargs): start = time() output = func(*args, **kwargs) end = time() if int(end - start) > 0: print(f"{func.__name__} runtime: {(end - start):0.4f} s") else: print(f"{func.__name__} runtime: {(end - start) * 1000:0.4f} ms") return output def fib_iterative_yield(n: int) -> Iterator[int]: if n < 0: raise ValueError("n is negative") a, b = 0, 1 yield a for _ in range(n): yield b a, b = b, a + b def fib_iterative(n: int) -> list[int]: if n < 0: raise ValueError("n is negative") if n == 0: return [0] fib = [0, 1] for _ in range(n - 1): fib.append(fib[-1] + fib[-2]) return fib def fib_recursive(n: int) -> list[int]: def fib_recursive_term(i: int) -> int: if i < 0: raise ValueError("n is negative") if i < 2: return i return fib_recursive_term(i - 1) + fib_recursive_term(i - 2) if n < 0: raise ValueError("n is negative") return [fib_recursive_term(i) for i in range(n + 1)] def fib_recursive_cached(n: int) -> list[int]: @functools.cache def fib_recursive_term(i: int) -> int: if i < 0: raise ValueError("n is negative") if i < 2: return i return fib_recursive_term(i - 1) + fib_recursive_term(i - 2) if n < 0: raise ValueError("n is negative") return [fib_recursive_term(i) for i in range(n + 1)] def fib_memoization(n: int) -> list[int]: if n < 0: raise ValueError("n is negative") cache: dict[int, int] = {0: 0, 1: 1, 2: 1} def rec_fn_memoized(num: int) -> int: if num in cache: return cache[num] value = rec_fn_memoized(num - 1) + rec_fn_memoized(num - 2) cache[num] = value return value return [rec_fn_memoized(i) for i in range(n + 1)] def fib_binet(n: int) -> list[int]: if n < 0: raise ValueError("n is negative") if n >= 1475: raise ValueError("n is too large") sqrt_5 = sqrt(5) phi = (1 + sqrt_5) / 2 return [round(phi**i / sqrt_5) for i in range(n + 1)] if __name__ == "__main__": from doctest import testmod testmod() num = 30 time_func(fib_iterative_yield, num) time_func(fib_iterative, num) time_func(fib_binet, num) time_func(fib_memoization, num) time_func(fib_recursive_cached, num) time_func(fib_recursive, num)
find_max_iterative finds the max value in a list:

>>> for nums in ([3, 2, 1], [-3, -2, -1], [3, -3, 0], [3.0, 3.1, 2.9]):
...     find_max_iterative(nums) == max(nums)
True
True
True
True
>>> find_max_iterative([2, 4, 9, 7, 19, 94, 5])
94
>>> find_max_iterative([])
Traceback (most recent call last):
    ...
ValueError: find_max_iterative() arg is an empty sequence

find_max_recursive is a divide and conquer algorithm to find the max value in a list:
:param nums: contains elements
:param left: index of first element
:param right: index of last element
:return: max in nums

>>> for nums in ([3, 2, 1], [-3, -2, -1], [3, -3, 0], [3.0, 3.1, 2.9]):
...     find_max_recursive(nums, 0, len(nums) - 1) == max(nums)
True
True
True
True
>>> nums = [1, 3, 5, 7, 9, 2, 4, 6, 8, 10]
>>> find_max_recursive(nums, 0, len(nums) - 1) == max(nums)
True
>>> find_max_recursive([], 0, 0)
Traceback (most recent call last):
    ...
ValueError: find_max_recursive() arg is an empty sequence
>>> find_max_recursive(nums, 0, len(nums)) == max(nums)
Traceback (most recent call last):
    ...
IndexError: list index out of range
>>> find_max_recursive(nums, -len(nums), -1) == max(nums)
True
>>> find_max_recursive(nums, -len(nums) - 1, -1) == max(nums)
Traceback (most recent call last):
    ...
IndexError: list index out of range
from __future__ import annotations


def find_max_iterative(nums: list[int | float]) -> int | float:
    if len(nums) == 0:
        raise ValueError("find_max_iterative() arg is an empty sequence")
    max_num = nums[0]
    for x in nums:
        if x > max_num:
            max_num = x
    return max_num


def find_max_recursive(nums: list[int | float], left: int, right: int) -> int | float:
    if len(nums) == 0:
        raise ValueError("find_max_recursive() arg is an empty sequence")
    if (
        left >= len(nums)
        or left < -len(nums)
        or right >= len(nums)
        or right < -len(nums)
    ):
        raise IndexError("list index out of range")
    if left == right:
        return nums[left]
    mid = (left + right) >> 1  # the middle
    left_max = find_max_recursive(nums, left, mid)  # find max in range[left, mid]
    right_max = find_max_recursive(nums, mid + 1, right)  # find max in range[mid + 1, right]
    return left_max if left_max >= right_max else right_max


if __name__ == "__main__":
    import doctest

    doctest.testmod(verbose=True)
find_min_iterative finds the minimum number in a list:
:param nums: contains elements
:return: min number in list

>>> for nums in ([3, 2, 1], [-3, -2, -1], [3, -3, 0], [3.0, 3.1, 2.9]):
...     find_min_iterative(nums) == min(nums)
True
True
True
True
>>> find_min_iterative([0, 1, 2, 3, 4, 5, -3, 24, -56])
-56
>>> find_min_iterative([])
Traceback (most recent call last):
    ...
ValueError: find_min_iterative() arg is an empty sequence

find_min_recursive is a divide and conquer algorithm to find the min value in a list:
:param nums: contains elements
:param left: index of first element
:param right: index of last element
:return: min in nums

>>> for nums in ([3, 2, 1], [-3, -2, -1], [3, -3, 0], [3.0, 3.1, 2.9]):
...     find_min_recursive(nums, 0, len(nums) - 1) == min(nums)
True
True
True
True
>>> nums = [1, 3, 5, 7, 9, 2, 4, 6, 8, 10]
>>> find_min_recursive(nums, 0, len(nums) - 1) == min(nums)
True
>>> find_min_recursive([], 0, 0)
Traceback (most recent call last):
    ...
ValueError: find_min_recursive() arg is an empty sequence
>>> find_min_recursive(nums, 0, len(nums)) == min(nums)
Traceback (most recent call last):
    ...
IndexError: list index out of range
>>> find_min_recursive(nums, -len(nums), -1) == min(nums)
True
>>> find_min_recursive(nums, -len(nums) - 1, -1) == min(nums)
Traceback (most recent call last):
    ...
IndexError: list index out of range
from __future__ import annotations


def find_min_iterative(nums: list[int | float]) -> int | float:
    if len(nums) == 0:
        raise ValueError("find_min_iterative() arg is an empty sequence")
    min_num = nums[0]
    for num in nums:
        min_num = min(min_num, num)
    return min_num


def find_min_recursive(nums: list[int | float], left: int, right: int) -> int | float:
    if len(nums) == 0:
        raise ValueError("find_min_recursive() arg is an empty sequence")
    if (
        left >= len(nums)
        or left < -len(nums)
        or right >= len(nums)
        or right < -len(nums)
    ):
        raise IndexError("list index out of range")
    if left == right:
        return nums[left]
    mid = (left + right) >> 1  # the middle
    left_min = find_min_recursive(nums, left, mid)  # find min in range[left, mid]
    right_min = find_min_recursive(nums, mid + 1, right)  # find min in range[mid + 1, right]
    return left_min if left_min <= right_min else right_min


if __name__ == "__main__":
    import doctest

    doctest.testmod(verbose=True)
https://en.wikipedia.org/wiki/Floor_and_ceiling_functions

Return the floor of x as an Integral.
:param x: the number
:return: the largest integer <= x

>>> import math
>>> all(floor(n) == math.floor(n)
...     for n in (1, -1, 0, -0, 1.1, -1.1, 1.0, -1.0, 1_000_000_000))
True
def floor(x: float) -> int: return int(x) if x - int(x) >= 0 else int(x) - 1 if __name__ == "__main__": import doctest doctest.testmod()
The Gamma function is a very useful tool in math and physics. It helps calculating complex integrals in a convenient way. For more info: https://en.wikipedia.org/wiki/Gamma_function

In mathematics, the gamma function is one commonly used extension of the factorial function to complex numbers. The gamma function is defined for all complex numbers except the non-positive integers. Python's standard library math.gamma() function overflows around gamma(171.624).

gamma_iterative calculates the value of the gamma function for num, where num is either an integer (1, 2, 3...) or a half-integer (0.5, 1.5, 2.5...):

>>> gamma_iterative(-1)
Traceback (most recent call last):
    ...
ValueError: math domain error
>>> gamma_iterative(0)
Traceback (most recent call last):
    ...
ValueError: math domain error
>>> gamma_iterative(9)
40320.0
>>> from math import gamma as math_gamma
>>> all(0.99999999 < gamma_iterative(i) / math_gamma(i) <= 1.000000001
...     for i in range(1, 50))
True
>>> gamma_iterative(-1) / math_gamma(-1) <= 1.000000001
Traceback (most recent call last):
    ...
ValueError: math domain error
>>> gamma_iterative(3.3) - math_gamma(3.3) <= 0.00000001
True

gamma_recursive calculates the same values, implemented using recursion. Examples:

>>> from math import isclose, gamma as math_gamma
>>> gamma_recursive(0.5)
1.7724538509055159
>>> gamma_recursive(1)
1.0
>>> gamma_recursive(2)
1.0
>>> gamma_recursive(3.5)
3.3233509704478426
>>> gamma_recursive(171.5)
9.483367566824795e+307
>>> all(isclose(gamma_recursive(num), math_gamma(num))
...     for num in (0.5, 2, 3.5, 171.5))
True
>>> gamma_recursive(0)
Traceback (most recent call last):
    ...
ValueError: math domain error
>>> gamma_recursive(-1.1)
Traceback (most recent call last):
    ...
ValueError: math domain error
>>> gamma_recursive(-4)
Traceback (most recent call last):
    ...
ValueError: math domain error
>>> gamma_recursive(172)
Traceback (most recent call last):
    ...
OverflowError: math range error
>>> gamma_recursive(1.1)
Traceback (most recent call last):
    ...
NotImplementedError: num must be an integer or a half-integer
import math from numpy import inf from scipy.integrate import quad def gamma_iterative(num: float) -> float: if num <= 0: raise ValueError("math domain error") return quad(integrand, 0, inf, args=(num))[0] def integrand(x: float, z: float) -> float: return math.pow(x, z - 1) * math.exp(-x) def gamma_recursive(num: float) -> float: if num <= 0: raise ValueError("math domain error") if num > 171.5: raise OverflowError("math range error") elif num - int(num) not in (0, 0.5): raise NotImplementedError("num must be an integer or a half-integer") elif num == 0.5: return math.sqrt(math.pi) else: return 1.0 if num == 1 else (num - 1) * gamma_recursive(num - 1) if __name__ == "__main__": from doctest import testmod testmod() num = 1.0 while num: num = float(input("Gamma of: ")) print(f"gamma_iterative({num}) = {gamma_iterative(num)}") print(f"gamma_recursive({num}) = {gamma_recursive(num)}") print("\nEnter 0 to exit...")
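A quick spot-check sketch of the defining property Gamma(n) = (n - 1)! for positive integers, assuming the functions above:

from math import factorial, isclose

# Gamma(n) == (n - 1)! for positive integer n
assert all(isclose(gamma_recursive(n), factorial(n - 1)) for n in range(1, 10))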
Reference: https://en.wikipedia.org/wiki/Gaussian_function

>>> gaussian(1)
0.24197072451914337
>>> gaussian(24)
3.342714441794458e-126
>>> gaussian(1, 4, 2)
0.06475879783294587
>>> gaussian(1, 5, 3)
0.05467002489199788

Supports NumPy arrays; use numpy.meshgrid with this to generate gaussian blur on images.

>>> import numpy as np
>>> x = np.arange(15)
>>> gaussian(x)
array([3.98942280e-01, 2.41970725e-01, 5.39909665e-02, 4.43184841e-03,
       1.33830226e-04, 1.48671951e-06, 6.07588285e-09, 9.13472041e-12,
       5.05227108e-15, 1.02797736e-18, 7.69459863e-23, 2.11881925e-27,
       2.14638374e-32, 7.99882776e-38, 1.09660656e-43])

>>> gaussian(15)
5.530709549844416e-50

>>> gaussian([1, 2, 'string'])
Traceback (most recent call last):
    ...
TypeError: unsupported operand type(s) for -: 'list' and 'float'

>>> gaussian('hello world')
Traceback (most recent call last):
    ...
TypeError: unsupported operand type(s) for -: 'str' and 'float'

>>> gaussian(10**234)  # doctest: +IGNORE_EXCEPTION_DETAIL
Traceback (most recent call last):
    ...
OverflowError: (34, 'Result too large')

>>> gaussian(10**-326)
0.3989422804014327

>>> gaussian(2523, mu=234234, sigma=3425)
0.0
from numpy import exp, pi, sqrt


def gaussian(x, mu: float = 0.0, sigma: float = 1.0) -> float:
    # returns an np.ndarray when x is an array, a float for scalar x
    return 1 / sqrt(2 * pi * sigma**2) * exp(-((x - mu) ** 2) / (2 * sigma**2))


if __name__ == "__main__":
    import doctest

    doctest.testmod()
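The meshgrid remark in the docstring can be made concrete; a minimal sketch (illustrative names, assuming the gaussian function above) builds a normalized 2-D Gaussian blur kernel from the 1-D function:

import numpy as np

# Build a 5x5 Gaussian blur kernel (sigma = 1.0) from the 1-D function
size = 5
ax = np.arange(size) - size // 2  # [-2, -1, 0, 1, 2]
xx, yy = np.meshgrid(ax, ax)
kernel = gaussian(xx) * gaussian(yy)  # separable 2-D Gaussian
kernel /= kernel.sum()  # normalize so the weights sum to 1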
This script demonstrates an implementation of the Gaussian Error Linear Unit function: https://en.wikipedia.org/wiki/Activation_function#Comparison_of_activation_functions

The function takes a vector of K real numbers as input and returns x * sigmoid(1.702*x). Gaussian Error Linear Unit (GELU) is a high-performing neural network activation function. This script is inspired by a corresponding research paper: https://arxiv.org/abs/1606.08415

sigmoid takes a vector x of K real numbers as input and returns 1 / (1 + e^-x):
https://en.wikipedia.org/wiki/Sigmoid_function

>>> sigmoid(np.array([-1.0, 1.0, 2.0]))
array([0.26894142, 0.73105858, 0.88079708])

gaussian_error_linear_unit implements the GELU function.

Parameters:
    vector (np.ndarray): a numpy array of shape (1, n) consisting of real values

Returns:
    gelu_vec (np.ndarray): the input numpy array after applying gelu

Examples:
>>> gaussian_error_linear_unit(np.array([-1.0, 1.0, 2.0]))
array([-0.15420423,  0.84579577,  1.93565862])
>>> gaussian_error_linear_unit(np.array([-3]))
array([-0.01807131])
import numpy as np def sigmoid(vector: np.ndarray) -> np.ndarray: return 1 / (1 + np.exp(-vector)) def gaussian_error_linear_unit(vector: np.ndarray) -> np.ndarray: return vector * sigmoid(1.702 * vector) if __name__ == "__main__": import doctest doctest.testmod()
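For comparison, the exact GELU is x * Phi(x) = 0.5 * x * (1 + erf(x / sqrt(2))), with Phi the standard normal CDF; a short sketch using scipy's erf (the exact_gelu helper is illustrative) shows the sigmoid form above tracks it closely:

import numpy as np
from scipy.special import erf


def exact_gelu(vector: np.ndarray) -> np.ndarray:
    # x * Phi(x), where Phi is the standard normal CDF
    return 0.5 * vector * (1 + erf(vector / np.sqrt(2)))


x = np.linspace(-3, 3, 61)
print(np.max(np.abs(exact_gelu(x) - gaussian_error_linear_unit(x))))  # roughly 0.02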
A Sophie Germain prime is any prime p where 2p + 1 is also prime. The second number, 2p + 1, is called a safe prime.

Examples of Germain primes include: 2, 3, 5, 11, 23
Their corresponding safe primes: 5, 7, 11, 23, 47
https://en.wikipedia.org/wiki/Safe_and_Sophie_Germain_primes

is_germain_prime checks if the input number and 2*number + 1 are prime:

>>> is_germain_prime(3)
True
>>> is_germain_prime(11)
True
>>> is_germain_prime(4)
False
>>> is_germain_prime(23)
True
>>> is_germain_prime(13)
False
>>> is_germain_prime(20)
False
>>> is_germain_prime('abc')
Traceback (most recent call last):
    ...
TypeError: Input value must be a positive integer. Input value: abc

is_safe_prime checks if the input number and (number - 1) / 2 are prime. The smallest safe prime is 5, with the Germain prime being 2:

>>> is_safe_prime(5)
True
>>> is_safe_prime(11)
True
>>> is_safe_prime(1)
False
>>> is_safe_prime(2)
False
>>> is_safe_prime(3)
False
>>> is_safe_prime(47)
True
>>> is_safe_prime('abc')
Traceback (most recent call last):
    ...
TypeError: Input value must be a positive integer. Input value: abc
from maths.prime_check import is_prime def is_germain_prime(number: int) -> bool: if not isinstance(number, int) or number < 1: msg = f"Input value must be a positive integer. Input value: {number}" raise TypeError(msg) return is_prime(number) and is_prime(2 * number + 1) def is_safe_prime(number: int) -> bool: if not isinstance(number, int) or number < 1: msg = f"Input value must be a positive integer. Input value: {number}" raise TypeError(msg) return (number - 1) % 2 == 0 and is_prime(number) and is_prime((number - 1) // 2) if __name__ == "__main__": from doctest import testmod testmod()
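A small demo sketch reproducing the examples from the text above, assuming the two functions just defined:

# First Germain primes and their corresponding safe primes
germain_primes = [n for n in range(2, 25) if is_germain_prime(n)]
print(germain_primes)  # [2, 3, 5, 11, 23]
print([2 * p + 1 for p in germain_primes])  # [5, 7, 11, 23, 47]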
Greatest Common Divisor.

Wikipedia reference: https://en.wikipedia.org/wiki/Greatest_common_divisor

gcd(a, b) = gcd(a, -b) = gcd(-a, b) = gcd(-a, -b) by definition of divisibility.

greatest_common_divisor calculates the GCD recursively:

>>> greatest_common_divisor(24, 40)
8
>>> greatest_common_divisor(1, 1)
1
>>> greatest_common_divisor(1, 800)
1
>>> greatest_common_divisor(11, 37)
1
>>> greatest_common_divisor(3, 5)
1
>>> greatest_common_divisor(16, 4)
4
>>> greatest_common_divisor(-3, 9)
3
>>> greatest_common_divisor(9, -3)
3
>>> greatest_common_divisor(3, -9)
3
>>> greatest_common_divisor(-3, -9)
3

gcd_by_iterative is more memory efficient because it does not create additional stack frames for recursive function calls, as done in the method above:

>>> gcd_by_iterative(24, 40)
8
>>> greatest_common_divisor(24, 40) == gcd_by_iterative(24, 40)
True
>>> gcd_by_iterative(-3, -9)
3
>>> gcd_by_iterative(3, -9)
3
>>> gcd_by_iterative(1, -800)
1
>>> gcd_by_iterative(11, 37)
1
def greatest_common_divisor(a: int, b: int) -> int:
    return abs(b) if a == 0 else greatest_common_divisor(b % a, a)


def gcd_by_iterative(x: int, y: int) -> int:
    # when y == 0, the loop terminates and x is returned as the final gcd
    while y:
        x, y = y, x % y
    return abs(x)


def main():
    try:
        nums = input("Enter two integers separated by comma (,): ").split(",")
        num_1 = int(nums[0])
        num_2 = int(nums[1])
        print(
            f"greatest_common_divisor({num_1}, {num_2}) = "
            f"{greatest_common_divisor(num_1, num_2)}"
        )
        print(f"By iterative gcd({num_1}, {num_2}) = {gcd_by_iterative(num_1, num_2)}")
    except (IndexError, UnboundLocalError, ValueError):
        print("Wrong input")


if __name__ == "__main__":
    main()
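A randomized cross-check sketch against the standard library's math.gcd (which also normalizes signs), assuming the two functions above:

from math import gcd
from random import randint

for _ in range(100):
    a, b = randint(-500, 500), randint(-500, 500)
    assert greatest_common_divisor(a, b) == gcd_by_iterative(a, b) == gcd(a, b)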
The Hardy-Ramanujan theorem states that the number of distinct prime factors of n will be approximately log(log(n)) for most natural numbers n.

>>> exact_prime_factor_count(51242183)
3

Running the module prints:
The number of distinct prime factors is/are 3
The value of log(log(n)) is 2.8765
import math


def exact_prime_factor_count(n: int) -> int:
    count = 0
    if n % 2 == 0:
        count += 1
    while n % 2 == 0:
        n = int(n / 2)
    # n must now be odd, so we can step by two (i.e. i += 2)
    i = 3
    while i <= int(math.sqrt(n)):
        if n % i == 0:
            count += 1
        while n % i == 0:
            n = int(n / i)
        i = i + 2
    # this condition checks whether a prime factor n greater than 2 remains
    if n > 2:
        count += 1
    return count


if __name__ == "__main__":
    n = 51242183
    print(f"The number of distinct prime factors is/are {exact_prime_factor_count(n)}")
    print(f"The value of log(log(n)) is {math.log(math.log(n)):.4f}")
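A short illustration sketch of the theorem for a few values (the sample numbers are illustrative), reusing the function above:

for number in (51242183, 2 * 3 * 5 * 7 * 11, 2**20):
    omega = exact_prime_factor_count(number)
    print(f"{number}: {omega} distinct prime factors, "
          f"log(log(n)) = {math.log(math.log(number)):.4f}")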
Integer Square Root Algorithm: an efficient method to calculate the square root of a non-negative integer num, rounded down to the nearest integer. It uses a binary search approach to find the integer square root without using any built-in exponent functions or operators.

https://en.wikipedia.org/wiki/Integer_square_root
https://docs.python.org/3/library/math.html#math.isqrt

Note:
- This algorithm is designed for non-negative integers only.
- The result is rounded down to the nearest integer.
- The algorithm has a time complexity of O(log(x)).
- Original algorithm idea based on binary search.

integer_square_root returns the integer square root of a non-negative integer num.

Args:
    num: a non-negative integer

Returns:
    the integer square root of num

Raises:
    ValueError: if num is not an integer or is negative

>>> [integer_square_root(i) for i in range(18)]
[0, 1, 1, 1, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3, 4, 4]
>>> integer_square_root(625)
25
>>> integer_square_root(2_147_483_647)
46340
>>> from math import isqrt
>>> all(integer_square_root(i) == isqrt(i) for i in range(20))
True
>>> integer_square_root(-1)
Traceback (most recent call last):
    ...
ValueError: num must be non-negative integer
>>> integer_square_root(1.5)
Traceback (most recent call last):
    ...
ValueError: num must be non-negative integer
>>> integer_square_root("0")
Traceback (most recent call last):
    ...
ValueError: num must be non-negative integer
def integer_square_root(num: int) -> int: if not isinstance(num, int) or num < 0: raise ValueError("num must be non-negative integer") if num < 2: return num left_bound = 0 right_bound = num // 2 while left_bound <= right_bound: mid = left_bound + (right_bound - left_bound) // 2 mid_squared = mid * mid if mid_squared == num: return mid if mid_squared < num: left_bound = mid + 1 else: right_bound = mid - 1 return right_bound if __name__ == "__main__": import doctest doctest.testmod()
An implementation of interquartile range (IQR), which is a measure of statistical dispersion, i.e. the spread of the data. The function takes a list of numeric values as input and returns the IQR.

Script inspired by this Wikipedia article: https://en.wikipedia.org/wiki/Interquartile_range

find_median is the implementation of the median:
:param nums: the list of numeric nums
:return: median of the list

>>> find_median(nums=([1, 2, 2, 3, 4]))
2
>>> find_median(nums=([1, 2, 2, 3, 4, 4]))
2.5
>>> find_median(nums=([-1, 2, 0, 3, 4, -4]))
1.5
>>> find_median(nums=([1.1, 2.2, 2, 3.3, 4.4, 4]))
2.65

interquartile_range returns the interquartile range for a list of numeric values:
:param nums: the list of numeric values
:return: interquartile range

>>> interquartile_range(nums=[4, 1, 2, 3, 2])
2.0
>>> interquartile_range(nums=[-2, -7, -10, 9, 8, 4, -67, 45])
17.0
>>> interquartile_range(nums=[-2.1, -7.1, -10.1, 9.1, 8.1, 4.1, -67.1, 45.1])
17.2
>>> interquartile_range(nums=[0, 0, 0, 0, 0])
0.0
>>> interquartile_range(nums=[])
Traceback (most recent call last):
    ...
ValueError: The list is empty. Provide a non-empty list.
from __future__ import annotations


def find_median(nums: list[int | float]) -> float:
    div, mod = divmod(len(nums), 2)
    if mod:
        return nums[div]
    return (nums[div] + nums[div - 1]) / 2


def interquartile_range(nums: list[int | float]) -> float:
    if not nums:
        raise ValueError("The list is empty. Provide a non-empty list.")
    nums.sort()
    length = len(nums)
    div, mod = divmod(length, 2)
    q1 = find_median(nums[:div])  # median of the lower half
    half_length = div + mod
    q3 = find_median(nums[half_length:length])  # median of the upper half
    return q3 - q1


if __name__ == "__main__":
    import doctest

    doctest.testmod()
Returns whether num is a palindrome or not. See for reference: https://en.wikipedia.org/wiki/Palindromic_number

>>> is_int_palindrome(-121)
False
>>> is_int_palindrome(0)
True
>>> is_int_palindrome(10)
False
>>> is_int_palindrome(11)
True
>>> is_int_palindrome(101)
True
>>> is_int_palindrome(120)
False
def is_int_palindrome(num: int) -> bool: if num < 0: return False num_copy: int = num rev_num: int = 0 while num > 0: rev_num = rev_num * 10 + (num % 10) num //= 10 return num_copy == rev_num if __name__ == "__main__": import doctest doctest.testmod()
Is an IP v4 address valid? By this implementation's convention, a valid IP address must be four octets in the form of a.b.c.d, where a, b, c, and d are numbers from 0-254. For example: 192.168.23.1 and 172.254.254.254 are valid IP addresses, while 192.168.255.0 and 255.192.3.121 are invalid. The main block prints whether the input IP is valid or invalid. Examples: is_ip_v4_address_valid("192.168.0.23") == True; is_ip_v4_address_valid("192.255.15.8") == False; is_ip_v4_address_valid("172.100.0.8") == True; is_ip_v4_address_valid("254.255.0.255") == False; is_ip_v4_address_valid("1.2.33333333.4") == False; is_ip_v4_address_valid("1.2.-3.4") == False; is_ip_v4_address_valid("1.2.3") == False; is_ip_v4_address_valid("1.2.3.4.5") == False; is_ip_v4_address_valid("1.2.A.4") == False; is_ip_v4_address_valid("0.0.0.0") == True; is_ip_v4_address_valid("1.2.3.") == False.
def is_ip_v4_address_valid(ip_v4_address: str) -> bool:
    octets = [int(i) for i in ip_v4_address.split(".") if i.isdigit()]
    return len(octets) == 4 and all(0 <= octet <= 254 for octet in octets)


if __name__ == "__main__":
    ip = input().strip()
    valid_or_invalid = "valid" if is_ip_v4_address_valid(ip) else "invalid"
    print(f"{ip} is a {valid_or_invalid} IP v4 address.")
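Note that this module deliberately treats 255 as invalid (octets 0-254), which differs from the IPv4 standard, where octets range from 0 to 255. A standards-compliant check, sketched here with the stdlib ipaddress module rather than this module's own rules, would be:

from ipaddress import AddressValueError, IPv4Address


def is_ip_v4_address_valid_standard(ip_v4_address: str) -> bool:
    # Accepts octets 0-255, per standard IPv4 semantics,
    # unlike the 0-254 rule used above.
    try:
        IPv4Address(ip_v4_address)
    except AddressValueError:
        return False
    return True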
References: Wikipedia, square-free number. is_square_free(factors) takes a list of prime factors as input and returns True if the factors are square-free. It simply checks for repetition in the numbers, so some of the doctest inputs below are not genuine prime factorizations but still return a value. Examples: is_square_free([1, 1, 2, 3, 4]) == False; is_square_free([1, 3, 4, "sd", 0.0]) == True; is_square_free([1, 0.5, 2, 0.0]) == True; is_square_free([1, 2, 2, 5]) == False; is_square_free("asd") == True; is_square_free(24) raises TypeError: 'int' object is not iterable.
from __future__ import annotations


def is_square_free(factors: list[int]) -> bool:
    return len(set(factors)) == len(factors)


if __name__ == "__main__":
    import doctest

    doctest.testmod()
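The function above only deduplicates whatever list it is given; it does not factor anything itself. A sketch of how it could be applied to an integer (the helper below is hypothetical, using trial division to build the prime-factor list first):

def is_integer_square_free(number: int) -> bool:
    # Build the prime factorization by trial division, then reuse the check.
    factors = []
    divisor = 2
    while divisor * divisor <= number:
        while number % divisor == 0:
            factors.append(divisor)
            number //= divisor
        divisor += 1
    if number > 1:
        factors.append(number)
    return is_square_free(factors)


print(is_integer_square_free(10))  # True  (10 = 2 * 5)
print(is_integer_square_free(12))  # False (12 = 2 * 2 * 3)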
The Jaccard similarity coefficient is a commonly used indicator of the similarity between two sets. Let U be a set and A and B be subsets of U; then the Jaccard index/similarity is defined to be the ratio of the number of elements of their intersection and the number of elements of their union. Inspired by Wikipedia and the book Mining of Massive Datasets (MMDS, 2nd edition, chapter 3): https://en.wikipedia.org/wiki/Jaccard_index, https://mmds.org. Jaccard similarity is widely used with MinHashing. jaccard_similarity(set_a, set_b, alternative_union=False) finds the Jaccard similarity between two sets; essentially, it's intersection over union. The alternative way to calculate this is to take the union as the sum of the number of items in the two sets; this leads to the Jaccard similarity of a set with itself being 1/2 instead of 1 (MMDS 2nd edition, page 77). Parameters: set_a and set_b are non-empty sets/lists/tuples; alternative_union is a boolean (if True, use the sum of the numbers of items as the union). Output: a float, the Jaccard similarity between the two sets. Examples: with set_a = {"a", "b", "c", "d", "e"} and set_b = {"c", "d", "e", "f", "h", "i"}: jaccard_similarity(set_a, set_b) == 0.375; jaccard_similarity(set_a, set_a) == 1.0; jaccard_similarity(set_a, set_a, True) == 0.5; the same 0.375 is returned when both inputs are lists or tuples, and the order of the arguments does not matter; mixing a set with a list or tuple raises ValueError: Set a and b must either both be sets or be either a list or a tuple. Implementation note: set_a is cast to a list because tuples cannot be mutated.
def jaccard_similarity(
    set_a: set[str] | list[str] | tuple[str],
    set_b: set[str] | list[str] | tuple[str],
    alternative_union=False,
):
    if isinstance(set_a, set) and isinstance(set_b, set):
        intersection_length = len(set_a.intersection(set_b))
        if alternative_union:
            union_length = len(set_a) + len(set_b)
        else:
            union_length = len(set_a.union(set_b))
        return intersection_length / union_length
    elif isinstance(set_a, (list, tuple)) and isinstance(set_b, (list, tuple)):
        intersection = [element for element in set_a if element in set_b]
        if alternative_union:
            return len(intersection) / (len(set_a) + len(set_b))
        else:
            # Cast set_a to list because tuples cannot be mutated
            union = list(set_a) + [element for element in set_b if element not in set_a]
            return len(intersection) / len(union)
    raise ValueError(
        "Set a and b must either both be sets or be either a list or a tuple."
    )


if __name__ == "__main__":
    set_a = {"a", "b", "c", "d", "e"}
    set_b = {"c", "d", "e", "f", "h", "i"}
    print(jaccard_similarity(set_a, set_b))
Calculate a joint probability distribution: https://en.wikipedia.org/wiki/Joint_probability_distribution. joint_probability_distribution(x_values, y_values, x_probabilities, y_probabilities) builds the joint distribution of two independent variables; e.g. joint_probability_distribution([1, 2], [-2, 5, 8], [0.7, 0.3], [0.3, 0.5, 0.2]) gives {(1, -2): 0.21, (1, 5): 0.35, (1, 8): 0.14, (2, -2): 0.09, (2, 5): 0.15, (2, 8): 0.06}. expectation(values, probabilities) calculates the expectation (mean); e.g. expectation([1, 2], [0.7, 0.3]) is close to 1.3. variance(values, probabilities) calculates the variance; e.g. variance([1, 2], [0.7, 0.3]) is close to 0.21. covariance(x_values, y_values, x_probabilities, y_probabilities) calculates the covariance; e.g. covariance([1, 2], [-2, 5, 8], [0.7, 0.3], [0.3, 0.5, 0.2]) == 2.7755575615628914e-17, which is zero up to floating-point error, as expected, since the joint distribution is constructed as the product of the marginals (independence). standard_deviation(variance) calculates the standard deviation; e.g. standard_deviation(0.21) == 0.458257569495584. The main block reads values and probabilities for X and Y from input, converts the values to integers and the probabilities to floats, calculates the joint probability distribution, and prints it along with the means, variances, covariance, and standard deviations.
def joint_probability_distribution(
    x_values: list[int],
    y_values: list[int],
    x_probabilities: list[float],
    y_probabilities: list[float],
) -> dict:
    return {
        (x, y): x_prob * y_prob
        for x, x_prob in zip(x_values, x_probabilities)
        for y, y_prob in zip(y_values, y_probabilities)
    }


def expectation(values: list, probabilities: list) -> float:
    return sum(x * p for x, p in zip(values, probabilities))


def variance(values: list[int], probabilities: list[float]) -> float:
    mean = expectation(values, probabilities)
    return sum((x - mean) ** 2 * p for x, p in zip(values, probabilities))


def covariance(
    x_values: list[int],
    y_values: list[int],
    x_probabilities: list[float],
    y_probabilities: list[float],
) -> float:
    mean_x = expectation(x_values, x_probabilities)
    mean_y = expectation(y_values, y_probabilities)
    return sum(
        (x - mean_x) * (y - mean_y) * px * py
        for x, px in zip(x_values, x_probabilities)
        for y, py in zip(y_values, y_probabilities)
    )


def standard_deviation(variance: float) -> float:
    return variance**0.5


if __name__ == "__main__":
    from doctest import testmod

    testmod()

    # Input values for x and y, converted to integers
    x_vals = input("Enter values of X separated by spaces: ").split()
    y_vals = input("Enter values of Y separated by spaces: ").split()
    x_values = [int(x) for x in x_vals]
    y_values = [int(y) for y in y_vals]

    # Input probabilities for x and y, converted to floats
    x_probs = input("Enter probabilities for X separated by spaces: ").split()
    y_probs = input("Enter probabilities for Y separated by spaces: ").split()
    assert len(x_values) == len(x_probs)
    assert len(y_values) == len(y_probs)
    x_probabilities = [float(p) for p in x_probs]
    y_probabilities = [float(p) for p in y_probs]

    # Calculate and print the joint probability distribution
    jpd = joint_probability_distribution(
        x_values, y_values, x_probabilities, y_probabilities
    )
    print(
        "\n".join(
            f"P(X={x}, Y={y}) = {probability}" for (x, y), probability in jpd.items()
        )
    )
    mean_xy = expectation(
        [x * y for x in x_values for y in y_values],
        [px * py for px in x_probabilities for py in y_probabilities],
    )
    print(f"x mean: {expectation(x_values, x_probabilities) = }")
    print(f"y mean: {expectation(y_values, y_probabilities) = }")
    print(f"xy mean: {mean_xy}")
    print(f"x: {variance(x_values, x_probabilities) = }")
    print(f"y: {variance(y_values, y_probabilities) = }")
    print(f"{covariance(x_values, y_values, x_probabilities, y_probabilities) = }")
    print(f"x: {standard_deviation(variance(x_values, x_probabilities)) = }")
    print(f"y: {standard_deviation(variance(y_values, y_probabilities)) = }")
The Josephus problem is a famous theoretical problem related to a certain counting-out game. This module provides functions to solve the Josephus problem for num_people and a step_size. The problem is defined as follows: num_people are standing in a circle; starting with a specified person, you count around the circle, skipping a fixed number of people (step_size); the person at whom you stop counting is eliminated from the circle, and the counting continues until only one person remains. For more information about the Josephus problem, refer to https://en.wikipedia.org/wiki/Josephus_problem. josephus_recursive(num_people, step_size) solves the problem recursively and returns the 0-based position of the last person remaining, raising ValueError if num_people or step_size is not a positive integer; e.g. josephus_recursive(7, 3) == 3 and josephus_recursive(10, 2) == 4, while inputs such as (0, 2), (1.9, 2), (-2, 2), (7, 0), (7, -2), (1_000, 0.01), or ("cat", "dog") raise ValueError: num_people or step_size is not a positive integer. find_winner(num_people, step_size) returns the position of the last person remaining as a 1-based index; e.g. find_winner(7, 3) == 4 and find_winner(10, 2) == 5. josephus_iterative(num_people, step_size) solves the problem iteratively and returns the position of the last person standing; e.g. josephus_iterative(5, 2) == 3 and josephus_iterative(7, 3) == 4.
def josephus_recursive(num_people: int, step_size: int) -> int:
    if (
        not isinstance(num_people, int)
        or not isinstance(step_size, int)
        or num_people <= 0
        or step_size <= 0
    ):
        raise ValueError("num_people or step_size is not a positive integer.")

    if num_people == 1:
        return 0

    return (josephus_recursive(num_people - 1, step_size) + step_size) % num_people


def find_winner(num_people: int, step_size: int) -> int:
    return josephus_recursive(num_people, step_size) + 1


def josephus_iterative(num_people: int, step_size: int) -> int:
    circle = list(range(1, num_people + 1))
    current = 0

    while len(circle) > 1:
        current = (current + step_size - 1) % len(circle)
        circle.pop(current)

    return circle[0]


if __name__ == "__main__":
    import doctest

    doctest.testmod()
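The recursive and iterative solvers should agree once the recursive result is shifted to a 1-based index; a quick consistency check (an assumption worth verifying, not part of the original module):

# find_winner() is 1-based, matching josephus_iterative()'s labels 1..n.
assert all(
    find_winner(people, step) == josephus_iterative(people, step)
    for people in range(1, 30)
    for step in range(1, 10)
)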
Juggler sequence: start with any positive integer n. If n is even, the next term is the floor of the square root of n; if n is odd, the next term is the floor of n^(3/2), i.e. n times its own square root. Reference: https://en.wikipedia.org/wiki/Juggler_sequence. Author: Akshay Dubey (https://github.com/itsAkshayDubey). Examples: juggler_sequence(1) == [1]; juggler_sequence(2) == [2, 1]; juggler_sequence(3) == [3, 5, 11, 36, 6, 2, 1]; juggler_sequence(5) == [5, 11, 36, 6, 2, 1]; juggler_sequence(10) == [10, 3, 5, 11, 36, 6, 2, 1]; juggler_sequence(25) == [25, 125, 1397, 52214, 228, 15, 58, 7, 18, 4, 2, 1]; juggler_sequence(0) and juggler_sequence(-1) raise ValueError: Input value of [number=...] must be a positive integer; juggler_sequence(6.0) raises TypeError: Input value of [number=6.0] must be an integer.
import math


def juggler_sequence(number: int) -> list[int]:
    if not isinstance(number, int):
        msg = f"Input value of [number={number}] must be an integer"
        raise TypeError(msg)
    if number < 1:
        msg = f"Input value of [number={number}] must be a positive integer"
        raise ValueError(msg)
    sequence = [number]
    while number != 1:
        if number % 2 == 0:
            number = math.floor(math.sqrt(number))
        else:
            number = math.floor(
                math.sqrt(number) * math.sqrt(number) * math.sqrt(number)
            )
        sequence.append(number)
    return sequence


if __name__ == "__main__":
    import doctest

    doctest.testmod()
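Because the odd branch multiplies three floating-point square roots, very large terms can suffer rounding error. An exact-integer alternative, a sketch rather than the module's implementation, computes the same floor(n^(3/2)) with math.isqrt:

import math


def juggler_next_exact(number: int) -> int:
    # floor(sqrt(n)) for even n; floor(n**(3/2)) == isqrt(n**3) for odd n,
    # computed entirely in integer arithmetic.
    return math.isqrt(number) if number % 2 == 0 else math.isqrt(number**3)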
Multiply two numbers using the Karatsuba algorithm. karatsuba(a, b) splits each operand around 10**m2, where m2 is half the digit count of the larger operand, and recursively computes three sub-products instead of four. Doctests: karatsuba(15463, 23489) == 15463 * 23489 is True; karatsuba(3, 9) == 3 * 9 is True.
def karatsuba(a: int, b: int) -> int:
    if len(str(a)) == 1 or len(str(b)) == 1:
        return a * b

    m1 = max(len(str(a)), len(str(b)))
    m2 = m1 // 2

    a1, a2 = divmod(a, 10**m2)
    b1, b2 = divmod(b, 10**m2)

    x = karatsuba(a2, b2)
    y = karatsuba((a1 + a2), (b1 + b2))
    z = karatsuba(a1, b1)

    return (z * 10 ** (2 * m2)) + ((y - z - x) * 10 ** (m2)) + (x)


def main():
    print(karatsuba(15463, 23489))


if __name__ == "__main__":
    main()
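The recursion implements the identity a*b = z*10^(2*m2) + (y - z - x)*10^m2 + x with x = a2*b2, z = a1*b1, and y = (a1 + a2)*(b1 + b2), so only three recursive multiplications are needed instead of four. A quick sanity check against Python's built-in multiplication:

# Spot-check a few products against the built-in operator.
for a, b in [(15463, 23489), (3, 9), (123456789, 987654321)]:
    assert karatsuba(a, b) == a * b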
Finds the k-th lexicographic permutation, in increasing order, of 0, 1, 2, ..., n-1 in O(n^2) time. Examples: the first permutation is always 0, 1, 2, ..., n-1, so kth_permutation(0, 5) == [0, 1, 2, 3, 4]. The order of the permutations of [0, 1, 2, 3] is [0, 1, 2, 3], [0, 1, 3, 2], [0, 2, 1, 3], [0, 2, 3, 1], [0, 3, 1, 2], [0, 3, 2, 1], [1, 0, 2, 3], [1, 0, 3, 2], [1, 2, 0, 3], [1, 2, 3, 0], [1, 3, 0, 2], ..., so kth_permutation(10, 4) == [1, 3, 0, 2]. The implementation precomputes the factorials from 1! to (n-1)! and then finds the permutation digit by digit.
def kth_permutation(k, n):
    # Factorials from 1! to (n-1)!
    factorials = [1]
    for i in range(2, n):
        factorials.append(factorials[-1] * i)
    assert 0 <= k < factorials[-1] * n, "k out of bounds"

    permutation = []
    elements = list(range(n))

    # Find permutation digit by digit
    while factorials:
        factorial = factorials.pop()
        number, k = divmod(k, factorial)
        permutation.append(elements[number])
        elements.remove(elements[number])

    permutation.append(elements[0])

    return permutation


if __name__ == "__main__":
    import doctest

    doctest.testmod()
Author: Abhijeeth S. res(x, y) reduces a large power to a more manageable number using the relation log10(x^y) = y * log10(x), where 10 is the base. Examples: res(5, 7) == 4.892790030352132; res(0, 5) == 0, since 0 raised to any positive power is 0; res(3, 0) == 1, since any number raised to 0 is 1; res(-1, 5) raises ValueError: math domain error. The main function reads two (base, power) pairs from input and typecasts them to int using map; here x is the base and y is the power. It finds the log of each power using res(), which takes two arguments, then checks for the largest power and reports it (or that both are equal).
import math


def res(x, y):
    if 0 not in (x, y):
        # We use the relation log10(x^y) = y * log10(x), where 10 is the base
        return y * math.log10(x)
    else:
        if x == 0:  # 0 raised to any number is 0
            return 0
        elif y == 0:
            return 1  # any number raised to 0 is 1
    raise AssertionError("This should never happen")


if __name__ == "__main__":
    # Read two numbers from input and typecast them to int using map function
    # Here x is the base and y is the power
    prompt = "Enter the base and the power separated by a comma: "
    x1, y1 = map(int, input(prompt).split(","))
    x2, y2 = map(int, input(prompt).split(","))

    # We find the log of each number, using the function res(), which takes two
    # arguments
    res1 = res(x1, y1)
    res2 = res(x2, y2)

    # We check for the largest number
    if res1 > res2:
        print("Largest number is", x1, "^", y1)
    elif res2 > res1:
        print("Largest number is", x2, "^", y2)
    else:
        print("Both are equal")
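For example, to compare 2^100 with 10^30 without materializing either power, the log relation reduces the comparison to two small floats:

import math

# log10(2**100) = 100 * log10(2) > 30 = log10(10**30),
# so 2**100 is the larger power.
print(100 * math.log10(2))  # approximately 30.103
print(30 * math.log10(10))  # 30.0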
Find the least common multiple of two numbers. Learn more: https://en.wikipedia.org/wiki/Least_common_multiple. least_common_multiple_slow steps through multiples of the larger number, e.g. least_common_multiple_slow(5, 2) == 10 and least_common_multiple_slow(12, 76) == 228. least_common_multiple_fast uses the greatest common divisor (https://en.wikipedia.org/wiki/Least_common_multiple#Using_the_greatest_common_divisor), e.g. least_common_multiple_fast(5, 2) == 10 and least_common_multiple_fast(12, 76) == 228.
import unittest
from timeit import timeit

from maths.greatest_common_divisor import greatest_common_divisor


def least_common_multiple_slow(first_num: int, second_num: int) -> int:
    max_num = first_num if first_num >= second_num else second_num
    common_mult = max_num
    while (common_mult % first_num > 0) or (common_mult % second_num > 0):
        common_mult += max_num
    return common_mult


def least_common_multiple_fast(first_num: int, second_num: int) -> int:
    return first_num // greatest_common_divisor(first_num, second_num) * second_num


def benchmark():
    setup = (
        "from __main__ import least_common_multiple_slow, least_common_multiple_fast"
    )
    print(
        "least_common_multiple_slow():",
        timeit("least_common_multiple_slow(1000, 999)", setup=setup),
    )
    print(
        "least_common_multiple_fast():",
        timeit("least_common_multiple_fast(1000, 999)", setup=setup),
    )


class TestLeastCommonMultiple(unittest.TestCase):
    test_inputs = (
        (10, 20),
        (13, 15),
        (4, 31),
        (10, 42),
        (43, 34),
        (5, 12),
        (12, 25),
        (10, 25),
        (6, 9),
    )
    expected_results = (20, 195, 124, 210, 1462, 60, 300, 50, 18)

    def test_lcm_function(self):
        for i, (first_num, second_num) in enumerate(self.test_inputs):
            slow_result = least_common_multiple_slow(first_num, second_num)
            fast_result = least_common_multiple_fast(first_num, second_num)
            with self.subTest(i=i):
                assert slow_result == self.expected_results[i]
                assert fast_result == self.expected_results[i]


if __name__ == "__main__":
    benchmark()
    unittest.main()
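The fast version relies on the identity lcm(a, b) * gcd(a, b) == a * b, which is why a single division by the GCD suffices. For example:

from math import gcd

a, b = 12, 76
assert gcd(a, b) == 4
assert a // gcd(a, b) * b == 228  # matches least_common_multiple_fast(12, 76)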
Approximates the arc length of a line segment by treating the curve as a sequence of linear lines and summing their lengths. Parameters: fnc, a function which defines the curve; x_start, the left end point, indicating the start of the line segment; x_end, the right end point, indicating the end of the line segment; steps, an accuracy gauge (more steps increases accuracy). Returns a float representing the length of the curve. Examples: for f(x) = x, line_length(f, 0, 1, 10) is about 1.414214; for f(x) = 1, line_length(f, -5.5, 4.5) is about 10.000000; for f(x) = sin(5 * x) + cos(10 * x) + x * x / 10, line_length(f, 0.0, 10.0, 10000) is about 69.534930.
from __future__ import annotations

import math
from collections.abc import Callable


def line_length(
    fnc: Callable[[float], float],
    x_start: float,
    x_end: float,
    steps: int = 100,
) -> float:
    x1 = x_start
    fx1 = fnc(x_start)
    length = 0.0

    for _ in range(steps):
        # Approximates curve as a sequence of linear lines and sums their length
        x2 = (x_end - x_start) / steps + x1
        fx2 = fnc(x2)
        length += math.hypot(x2 - x1, fx2 - fx1)

        # Increment step
        x1 = x2
        fx1 = fx2

    return length


if __name__ == "__main__":

    def f(x):
        return math.sin(10 * x)

    print("f(x) = sin(10 * x)")
    print("The length of the curve from x = -10 to x = 10 is:")
    i = 10
    while i <= 100000:
        print(f"With {i} steps: {line_length(f, -10, 10, i)}")
        i *= 10
Liouville lambda function: the Liouville lambda function, denoted by lambda(n), is +1 if n is the product of an even number of prime numbers and -1 if it is the product of an odd number of primes. Reference: https://en.wikipedia.org/wiki/Liouville_function. Author: Akshay Dubey (https://github.com/itsAkshayDubey). liouville_lambda(number) takes an integer as input and returns 1 if n has an even number of prime factors and -1 otherwise. Examples: liouville_lambda(10) == 1; liouville_lambda(11) == -1; liouville_lambda(0) and liouville_lambda(-1) raise ValueError: Input must be a positive integer; liouville_lambda(11.0) raises TypeError: Input value of [number=11.0] must be an integer.
from maths.prime_factors import prime_factors


def liouville_lambda(number: int) -> int:
    if not isinstance(number, int):
        msg = f"Input value of [number={number}] must be an integer"
        raise TypeError(msg)
    if number < 1:
        raise ValueError("Input must be a positive integer")
    return -1 if len(prime_factors(number)) % 2 else 1


if __name__ == "__main__":
    import doctest

    doctest.testmod()
In mathematics, the Lucas-Lehmer test (LLT) is a primality test for Mersenne numbers: https://en.wikipedia.org/wiki/Lucas%E2%80%93Lehmer_primality_test. A Mersenne number is a number that is one less than a power of two, that is, M_p = 2^p - 1: https://en.wikipedia.org/wiki/Mersenne_prime. The Lucas-Lehmer test is the primality test used by the Great Internet Mersenne Prime Search (GIMPS) to locate large primes. lucas_lehmer_test(p) returns True if 2^p - 1 is prime; e.g. lucas_lehmer_test(p=7) == True, but lucas_lehmer_test(p=11) == False, since M_11 = 2^11 - 1 = 2047 = 23 * 89.
def lucas_lehmer_test(p: int) -> bool:
    if p < 2:
        raise ValueError("p should not be less than 2!")
    elif p == 2:
        return True

    s = 4
    m = (1 << p) - 1
    for _ in range(p - 2):
        s = ((s * s) - 2) % m
    return s == 0


if __name__ == "__main__":
    print(lucas_lehmer_test(7))
    print(lucas_lehmer_test(11))
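Applied across the small prime exponents (the test is only meaningful for prime p), it recovers the known Mersenne prime exponents below 30:

# Known Mersenne prime exponents under 30 are 2, 3, 5, 7, 13, 17, 19.
primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
print([p for p in primes if lucas_lehmer_test(p)])  # [2, 3, 5, 7, 13, 17, 19]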
https://en.wikipedia.org/wiki/Lucas_number. recursive_lucas_number(n) returns the n-th Lucas number recursively and dynamic_lucas_number(n) returns it iteratively; both accept only integer arguments and raise TypeError otherwise. Examples for both: n=1 gives 1, n=20 gives 15127, n=0 gives 2, n=25 gives 167761; passing -1.5 raises TypeError: ... accepts only integer arguments.
def recursive_lucas_number(n_th_number: int) -> int:
    if not isinstance(n_th_number, int):
        raise TypeError("recursive_lucas_number accepts only integer arguments.")
    if n_th_number == 0:
        return 2
    if n_th_number == 1:
        return 1

    return recursive_lucas_number(n_th_number - 1) + recursive_lucas_number(
        n_th_number - 2
    )


def dynamic_lucas_number(n_th_number: int) -> int:
    if not isinstance(n_th_number, int):
        raise TypeError("dynamic_lucas_number accepts only integer arguments.")
    a, b = 2, 1
    for _ in range(n_th_number):
        a, b = b, a + b
    return a


if __name__ == "__main__":
    from doctest import testmod

    testmod()
    n = int(input("Enter the number of terms in lucas series:\n").strip())
    print("Using recursive function to calculate lucas series:")
    print(" ".join(str(recursive_lucas_number(i)) for i in range(n)))
    print("\nUsing dynamic function to calculate lucas series:")
    print(" ".join(str(dynamic_lucas_number(i)) for i in range(n)))
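recursive_lucas_number() runs in exponential time because it recomputes the same subproblems over and over; memoizing the same recurrence brings it down to linear. A sketch using functools, not part of the module itself:

from functools import lru_cache


@lru_cache(maxsize=None)
def cached_lucas_number(n_th_number: int) -> int:
    # Same recurrence as recursive_lucas_number, with results cached.
    if n_th_number == 0:
        return 2
    if n_th_number == 1:
        return 1
    return cached_lucas_number(n_th_number - 1) + cached_lucas_number(n_th_number - 2)


assert cached_lucas_number(25) == dynamic_lucas_number(25) == 167761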
https://en.wikipedia.org/wiki/Taylor_series#Trigonometric_functions. maclaurin_sin(theta, accuracy=30) finds the Maclaurin approximation of sin: theta is the angle and accuracy is the degree of accuracy wanted (the minimum number of series terms); it returns the value of sine in radians. Examples: all(isclose(maclaurin_sin(x, 50), sin(x)) for x in range(-25, 25)) is True; maclaurin_sin(10) == 0.5440211108893691; maclaurin_sin(-10) == -0.5440211108893704; maclaurin_sin(10, 15) == 0.544021110889369; maclaurin_sin(-10, 15) == -0.5440211108893704; a non-numeric theta raises ValueError: maclaurin_sin() requires either an int or float for theta, and a non-positive or non-integer accuracy raises ValueError: maclaurin_sin() requires a positive int for accuracy. maclaurin_cos(theta, accuracy=30) does the same for cos, returning the value of cosine in radians. Examples: all(isclose(maclaurin_cos(x, 50), cos(x)) for x in range(-25, 25)) is True; maclaurin_cos(5) == 0.2836621854632268; maclaurin_cos(-5) == 0.2836621854632265; maclaurin_cos(10, 15) == -0.8390715290764524; maclaurin_cos(-10, 15) == -0.8390715290764521; invalid inputs raise the analogous ValueErrors.
from math import factorial, pi


def maclaurin_sin(theta: float, accuracy: int = 30) -> float:
    if not isinstance(theta, (int, float)):
        raise ValueError("maclaurin_sin() requires either an int or float for theta")

    if not isinstance(accuracy, int) or accuracy <= 0:
        raise ValueError("maclaurin_sin() requires a positive int for accuracy")

    theta = float(theta)
    div = theta // (2 * pi)
    theta -= 2 * div * pi
    return sum(
        (-1) ** r * theta ** (2 * r + 1) / factorial(2 * r + 1) for r in range(accuracy)
    )


def maclaurin_cos(theta: float, accuracy: int = 30) -> float:
    if not isinstance(theta, (int, float)):
        raise ValueError("maclaurin_cos() requires either an int or float for theta")

    if not isinstance(accuracy, int) or accuracy <= 0:
        raise ValueError("maclaurin_cos() requires a positive int for accuracy")

    theta = float(theta)
    div = theta // (2 * pi)
    theta -= 2 * div * pi
    return sum((-1) ** r * theta ** (2 * r) / factorial(2 * r) for r in range(accuracy))


if __name__ == "__main__":
    import doctest

    doctest.testmod()

    print(maclaurin_sin(10))
    print(maclaurin_sin(-10))
    print(maclaurin_sin(10, 15))
    print(maclaurin_sin(-10, 15))

    print(maclaurin_cos(5))
    print(maclaurin_cos(-5))
    print(maclaurin_cos(10, 15))
    print(maclaurin_cos(-10, 15))
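The range reduction (subtracting multiples of 2*pi from theta before summing) is what keeps the truncated series accurate: without it, a large theta would need far more terms before the factorial denominators win out. The error also shrinks quickly as terms are added:

from math import isclose, sin

# More terms means closer agreement with math.sin after range reduction.
for terms in (1, 2, 5, 10, 30):
    print(terms, abs(maclaurin_sin(1.0, terms) - sin(1.0)))
assert isclose(maclaurin_sin(1.0, 30), sin(1.0))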
Expects two lists of numbers representing two points in the same n-dimensional space: https://en.wikipedia.org/wiki/Taxicab_geometry. manhattan_distance(point_a, point_b) examples: manhattan_distance([1, 1], [2, 2]) == 2.0; manhattan_distance([1.5, 1.5], [2, 2]) == 1.0; manhattan_distance([1.5, 1.5], [2.5, 2]) == 1.5; manhattan_distance([-3, -3, -3], [0, 0, 0]) == 9.0; a missing input raises ValueError: Missing an input; points of different dimensions raise ValueError: Both points must be in the same n-dimensional space; non-numeric items or non-list inputs raise TypeError: Expected a list of numbers as input, found str (or int, etc.). _validate_point(point) performs those per-point checks and raises the same ValueError/TypeError for None, non-numeric items, or non-list inputs. manhattan_distance_one_liner is a version with a one-liner body; it has the same behavior and the same doctests as manhattan_distance.
def manhattan_distance(point_a: list, point_b: list) -> float:
    _validate_point(point_a)
    _validate_point(point_b)
    if len(point_a) != len(point_b):
        raise ValueError("Both points must be in the same n-dimensional space")

    return float(sum(abs(a - b) for a, b in zip(point_a, point_b)))


def _validate_point(point: list[float]) -> None:
    if point:
        if isinstance(point, list):
            for item in point:
                if not isinstance(item, (int, float)):
                    msg = (
                        "Expected a list of numbers as input, found "
                        f"{type(item).__name__}"
                    )
                    raise TypeError(msg)
        else:
            msg = f"Expected a list of numbers as input, found {type(point).__name__}"
            raise TypeError(msg)
    else:
        raise ValueError("Missing an input")


def manhattan_distance_one_liner(point_a: list, point_b: list) -> float:
    _validate_point(point_a)
    _validate_point(point_b)
    if len(point_a) != len(point_b):
        raise ValueError("Both points must be in the same n-dimensional space")

    return float(sum(abs(x - y) for x, y in zip(point_a, point_b)))


if __name__ == "__main__":
    import doctest

    doctest.testmod()
Given an array of integer elements and an integer k, we are required to find the maximum sum of k consecutive elements in the array. Instead of using a nested for loop in a brute-force approach, we use a technique called the window sliding technique, where the nested loops are converted to a single loop to reduce the time complexity. max_sum_in_array(array, k) returns the maximum sum of k consecutive elements: for arr = [1, 4, 2, 10, 2, 3, 1, 0, 20] and k = 4 the result is 24; k = 10 (longer than the array) raises ValueError: Invalid Input; for arr = [1, 4, 2, 10, 2, 13, 1, 0, 2] and k = 4 the result is 27.
from __future__ import annotations


def max_sum_in_array(array: list[int], k: int) -> int:
    if len(array) < k or k < 0:
        raise ValueError("Invalid Input")
    max_sum = current_sum = sum(array[:k])
    for i in range(len(array) - k):
        current_sum = current_sum - array[i] + array[i + k]
        max_sum = max(max_sum, current_sum)
    return max_sum


if __name__ == "__main__":
    from doctest import testmod
    from random import randint

    testmod()
    array = [randint(-1000, 1000) for i in range(100)]
    k = randint(0, 110)
    print(f"The maximum sum of {k} consecutive elements is {max_sum_in_array(array, k)}")
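Each window update subtracts the element leaving the window and adds the one entering it, so the whole scan is O(n). Tracing the module's first doctest makes the updates concrete:

# arr = [1, 4, 2, 10, 2, 3, 1, 0, 20], k = 4
# window sums: 17 -> 18 -> 17 -> 16 -> 6 -> 24, so the answer is 24.
print(max_sum_in_array([1, 4, 2, 10, 2, 3, 1, 0, 20], 4))  # 24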
Finds the median of two arrays of numbers. Examples: median_of_two_arrays([1, 2], [3]) == 2; median_of_two_arrays([0, -1.1], [2.5, 1]) == 0.5; median_of_two_arrays([], [2.5, 1]) == 1.75; median_of_two_arrays([], [0]) == 0; median_of_two_arrays([], []) raises IndexError: list index out of range.
from __future__ import annotations


def median_of_two_arrays(nums1: list[float], nums2: list[float]) -> float:
    all_numbers = sorted(nums1 + nums2)
    div, mod = divmod(len(all_numbers), 2)
    if mod == 1:
        return all_numbers[div]
    else:
        return (all_numbers[div] + all_numbers[div - 1]) / 2


if __name__ == "__main__":
    import doctest

    doctest.testmod()

    array_1 = [float(x) for x in input("Enter the elements of first array: ").split()]
    array_2 = [float(x) for x in input("Enter the elements of second array: ").split()]
    print(f"The median of two arrays is: {median_of_two_arrays(array_1, array_2)}")
This function calculates the Minkowski distance for a given order between two n-dimensional points represented as lists. For the case of order = 1, the Minkowski distance degenerates to the Manhattan distance; for order = 2, the usual Euclidean distance is obtained. Reference: https://en.wikipedia.org/wiki/Minkowski_distance. Note: due to floating-point calculation errors, the output of this function may be inaccurate. Examples: minkowski_distance([1.0, 1.0], [2.0, 2.0], 1) == 2.0; minkowski_distance([1.0, 2.0, 3.0, 4.0], [5.0, 6.0, 7.0, 8.0], 2) == 8.0; np.isclose(5.0, minkowski_distance([5.0], [0.0], 3)) is True; an order below 1 raises ValueError: The order must be greater than or equal to 1.; points of different dimension raise ValueError: Both points must have the same dimension.
def minkowski_distance(
    point_a: list[float],
    point_b: list[float],
    order: int,
) -> float:
    if order < 1:
        raise ValueError("The order must be greater than or equal to 1.")

    if len(point_a) != len(point_b):
        raise ValueError("Both points must have the same dimension.")

    return sum(abs(a - b) ** order for a, b in zip(point_a, point_b)) ** (1 / order)


if __name__ == "__main__":
    import doctest

    doctest.testmod()
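A concrete illustration of the degenerate cases on the classic 3-4-5 right triangle:

# Order 1 recovers the Manhattan distance, order 2 the Euclidean distance.
print(minkowski_distance([0.0, 0.0], [3.0, 4.0], 1))  # 7.0
print(minkowski_distance([0.0, 0.0], [3.0, 4.0], 2))  # 5.0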