Weekly Status Template
Executive Summary:
!!!!

Accomplishments:
!!!!

Issues for Escalations/Blockers:
!!NIL!!

Plan for next week:
!!!!

PTO and Vacation:
Swathi: !!!!
Jenani: !!!!
Suraj: !!!!
Anees: !!!!
Keerthana: !!!!
Jayaprakash: !!!!
Kaushik: !!!!
Shaik: !!!!
Jeyanth: !!!!

DETAILED SUMMARY:

JIRA TICKET(S) COMPLETED:

[JENANI SAMPATH] !!JIRA-ID!!: !!Description!! – ==Closed.==
!!Brief Summary of the ticket in 3-4 points!!

[SURAJRAM SRINIVASAN] !!JIRA-ID!!: !!Description!! – ==Closed.==
!!Brief Summary of the ticket in 3-4 points!!

[ANEES ASLAM] !!JIRA-ID!!: !!Description!! – ==Closed.==
!!Brief Summary of the ticket in 3-4 points!!

[KEERTHANA SUNDARBABU] !!JIRA-ID!!: !!Description!! – ==Closed.==
!!Brief Summary of the ticket in 3-4 points!!

[JAYAPRAKASH MASILAMANI] !!JIRA-ID!!: !!Description!! – ==Closed.==
!!Brief Summary of the ticket in 3-4 points!!

[SWATHI RAJAMANI] !!JIRA-ID!!: !!Description!! – ==Closed.==
!!Brief Summary of the ticket in 3-4 points!!

[SHAIK SHUAIB AFNAN] !!JIRA-ID!!: !!Description!! – ==Closed.==
!!Brief Summary of the ticket in 3-4 points!!

[SANJAY RAVI] !!JIRA-ID!!: !!Description!! – ==Closed.==
!!Brief Summary of the ticket in 3-4 points!!

JIRA TICKET(S) IN PROGRESS:

[JENANI SAMPATH] !!JIRA-ID!!: !!Description!! – ==In Progress.==
!!Brief Summary of the ticket in 2-3 points!! [ETA - !!Date!!]

[SURAJRAM SRINIVASAN] !!JIRA-ID!!: !!Description!! – ==In Progress.==
!!Brief Summary of the ticket in 2-3 points!! [ETA - !!Date!!]

[ANEES ASLAM] !!JIRA-ID!!: !!Description!! – ==In Progress.==
!!Brief Summary of the ticket in 2-3 points!! [ETA - !!Date!!]

[KEERTHANA SUNDARBABU] !!JIRA-ID!!: !!Description!! – ==In Progress.==
!!Brief Summary of the ticket in 2-3 points!! [ETA - !!Date!!]

[JAYAPRAKASH MASILAMANI] !!JIRA-ID!!: !!Description!! – ==In Progress.==
!!Brief Summary of the ticket in 2-3 points!! [ETA - !!Date!!]

[SWATHI RAJAMANI] !!JIRA-ID!!: !!Description!! – ==In Progress.==
!!Brief Summary of the ticket in 2-3 points!! [ETA - !!Date!!]

[SHAIK SHUAIB AFNAN] !!JIRA-ID!!: !!Description!! – ==In Progress.==
!!Brief Summary of the ticket in 2-3 points!! [ETA - !!Date!!]

[SANJAY RAVI] !!JIRA-ID!!: !!Description!! – ==In Progress.==
!!Brief Summary of the ticket in 2-3 points!! [ETA - !!Date!!]

JIRA TICKET(S) ON HOLD:

[JENANI SAMPATH] !!JIRA-ID!!: !!Description!! – ==On Hold.==
!!Brief Summary of the ticket in 2-3 points!!

[SURAJRAM SRINIVASAN] !!JIRA-ID!!: !!Description!! – ==On Hold.==
!!Brief Summary of the ticket in 2-3 points!!

[ANEES ASLAM] !!JIRA-ID!!: !!Description!! – ==On Hold.==
!!Brief Summary of the ticket in 2-3 points!!

[KEERTHANA SUNDARBABU] !!JIRA-ID!!: !!Description!! – ==On Hold.==
!!Brief Summary of the ticket in 2-3 points!!

[JAYAPRAKASH MASILAMANI] !!JIRA-ID!!: !!Description!! – ==On Hold.==
!!Brief Summary of the ticket in 2-3 points!!

[SWATHI RAJAMANI] !!JIRA-ID!!: !!Description!! – ==On Hold.==
!!Brief Summary of the ticket in 2-3 points!!

[KAUSHIK VISHAL SARAVANAN] !!JIRA-ID!!: !!Description!! – ==On Hold.==
!!Brief Summary of the ticket in 2-3 points!!

[SHAIK SHUAIB AFNAN] !!JIRA-ID!!: !!Description!! – ==On Hold.==
!!Brief Summary of the ticket in 2-3 points!!
Incident Report: Compromise of Ubuntu Server on AWS
Incident Overview
On June 7th, 2024, we received an alert about high disk usage on our AWS-hosted Ubuntu server airtel-dev-2. Subsequent investigation revealed that the server had been compromised via an outdated and publicly accessible Docker instance and was being used to conduct network scans.

Timeline and Findings
1. Observed that the main disk was filling up
   - Observed that /var/log/syslog was > 7 GB.
   - Tailed the log and saw suspicious messages, e.g.:
     httpd[1159514]: message repeated 18 times: [ Error running zzh command: fork/exec /usr/local/bin/pnscan: no such file or directory]
     Jun 7 21:40:07 airtel-dev-2 httpd[508]: Error creating output file: open .r.89.118.o: too many open files
2. Searched for httpd files
   1. Did a search of the file system.
   2. Found suspicious files in /etc/.httpd:
      1. /etc/.httpd/…/httpd
      2. /etc/.httpd/…./httpd
   3. Found scan result files in /etc/.httpd/…/:
      1. ips_for_zgrab_120.txt
      2. ips_for_zgrab_87.txt
      3. scan_120.0.0.0_8.txt
      4. scan_156.0.0.0_8.txt
      5. scan_87.0.0.0_8.txt
   4. Looked at the scan files and saw the results of scans for port 2375.
   5. Ran the file command on the two suspicious httpd binaries and saw that they were executables written in Go.
   6. Ran the strings command and observed the following in one of the binaries:
      1. The commands untarred an executable called vurl from base64-encoded data. This was then used to download a script whose contents were to be installed as cron jobs.
      2. This seems to have partially failed, because the cron entries were not present when checked.
   7. Conclusion: the breach appears to have been designed to use our server to scan other servers.
3. Discovery of installed libraries
   1. Observed that the syslog was still filling.
   2. top did not show any processes that indicated suspicious activity.
   3. Observed suspicious .so files in /usr/local/lib:
      1. /usr/local/lib/top.so
      2. /usr/local/lib/pnscan.so
      3. /usr/local/lib/masscan.so
      4. /usr/local/lib/httpd.so
      5. /usr/local/lib/bioset.so
   4. Saw that these files were referenced in /etc/ld.so.preload.
   5. Removed the files and the /etc/ld.so.preload file:
      1. The initial removal attempt failed with "permission denied".
      2. Saw that the files had the immutable flag set, using lsattr.
      3. Removed the immutable flag using chattr -i.
      4. Deleted the files.
   6. Restarted the server to restart all applications that had loaded these malicious shared libraries.
   7. Verified the shared libraries were no longer in use with lsof piped to grep.
4. Used top to see what was still filling the logs
   1. After removing the .so files, top worked properly.
   2. Observed a command /var/tmp/.222/top.
   3. Investigated the directory and found more suspicious files.
   4. Ran strings on the top binary and saw references to cryptocurrency mining.
   5. Tried removing the directory and killing the process.
   6. The process restarted.
   7. Searched for the top command in /etc.
   8. Found a monitor.service file that was restarting it.
   9. Used systemctl to disable the service.
   10. Found two other service files that contained malicious scripts:
       -rw-r--r-- 1 root root 157 Jun 7 10:22 zzhr.service
       -rw-r--r-- 1 root root 159 Jun 7 10:22 zzhre.service
   11. Disabled both of those services as well.
   12. The rogue top process stopped running.
5. Searched for files created around the time of the attack
   1. Found an a.tar file that contained the vurl command.

Attack Vector
Entry Point: The attack vector appears to have been the Docker daemon, which was outdated and publicly accessible, leading to exploitation.
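The ld.so.preload persistence seen above can be watched for with a small script. Below is a minimal sketch, not part of the original remediation: it assumes a Linux host with lsattr (e2fsprogs) installed, and it checks the same locations that were abused in this incident (/etc/ld.so.preload and .so files under /usr/local/lib).

```python
import os
import subprocess

PRELOAD_FILE = "/etc/ld.so.preload"   # should normally be absent or empty
LIB_DIR = "/usr/local/lib"            # location of the malicious .so files in this incident

def check_preload():
    # Any entry here forces a shared library into every process, as the attackers did.
    if os.path.exists(PRELOAD_FILE):
        with open(PRELOAD_FILE) as f:
            entries = [line.strip() for line in f if line.strip()]
        if entries:
            print(f"WARNING: {PRELOAD_FILE} preloads: {entries}")

def check_immutable_libs():
    # The malicious .so files had the immutable attribute set (shown as 'i' by lsattr).
    if not os.path.isdir(LIB_DIR):
        return
    for name in os.listdir(LIB_DIR):
        if not name.endswith(".so"):
            continue
        path = os.path.join(LIB_DIR, name)
        result = subprocess.run(["lsattr", path], capture_output=True, text=True)
        if result.returncode == 0 and "i" in result.stdout.split()[0]:
            print(f"WARNING: immutable shared library: {path}")

if __name__ == "__main__":
    check_preload()
    check_immutable_libs()
```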
Result of the Attack
1. Installation of Scanning Software
   The attackers installed scanning software designed to find more publicly accessible Docker ports, likely to infect additional systems.
2. Additional Malicious Activities
   - Crypto Mining: Identified a binary associated with crypto mining, indicating cryptocurrency mining activity.
   - Potential Further Activities: Other malicious activities are yet to be fully detailed, likely involving the compromised system being used for further attacks or exploitation.

Immediate Actions and Recommendations
1. Immediate Actions Taken/Need to be Taken
   - Isolated the compromised server to prevent further damage.
   - Conducted a thorough forensic analysis to understand the extent of the compromise.
   - Updated Docker to the latest secure version.
   - Reviewed and corrected Docker and server configurations to enhance security.
   - Implemented stricter access controls and firewall rules to limit public exposure.
2. Recommendations for Future Prevention
   - Regular Updates: Ensure regular updates and patch management for all software and services.
   - Access Controls: Implement strict access controls and network segmentation to limit exposure.
   - Configuration Management: Regularly review and audit service configurations.
   - Monitoring and Logging: Enhance monitoring and logging to detect and respond to suspicious activities promptly.
   - Security Training: Conduct regular security training for IT staff to ensure awareness of best practices and emerging threats.
Obstacle Detection
1. EXECUTIVE SUMMARY
An obstacle is an object that disrupts traffic and interferes with its flow. The system provides in-vehicle warnings about upcoming obstacles for safer navigation.
Examples:
- Road construction (barriers, block stones, block cones)
- Pedestrian interference
- Pet interference
- Vehicle interference

2. OVERVIEW
USE CASES - Autonomous vehicles
PROBLEM TO SOLVE - Avoid collisions and congestion in traffic; give the vehicle enough warning to stop in advance.
SYSTEM INPUT - Sensory inputs: camera
SYSTEM OUTCOME - Obstacle detection (identification and localization), tracking

3. PRE-REQUISITES - PACKAGES
- Python 3.10
- OpenCV
- PyTorch
- dvc
- dvc-s3

4. MACHINE LEARNING MODEL
MODEL USED - YOLOv8l
Benefits:
- Low latency
- Higher accuracy than other object detection models
Key features:
- YOLOv8 utilizes state-of-the-art CNN architectures in its backbone and neck for efficient feature extraction from images, which translates to better object detection accuracy.
- YOLOv8 predicts the object's center directly. This simplifies the detection process and potentially improves efficiency while maintaining accuracy.
- YOLOv8 is designed to achieve a good balance between object detection accuracy and processing speed, making it suitable for real-time applications where fast performance is crucial.
Metrics:
- mAP, mAP50, mAP75, mAPs
- FLOPs (floating-point operations)

5. MODEL ARCHITECTURE
Figure 1 — image adapted from: https://blog.roboflow.com/whats-new-in-yolov8

6. SYSTEM DEVELOPMENT PLAN
I. Prepare the object detection dataset to train YOLOv8l (pretrained on the Roboflow-100 and COCO datasets)
II. Train YOLOv8l on the custom dataset
III. Fine-tune the YOLOv8l models
IV. Run inference on test images
V. Run the trained YOLOv8l model on video

7. PIPELINE
7.1 DATA COLLECTION - DATASET
KITTI Vision Benchmark Suite: a popular dataset for autonomous vehicle research, especially obstacle detection. It provides a variety of sensor data collected from a vehicle driving through various urban environments in Karlsruhe, Germany.
Class labels: "Car", "Van", "Truck", "Pedestrian", "Cyclist", "Tram", "Misc", "DontCare"

7.2 DATA TO BE TRAINED
- Images (front-camera view of the car)
- Labels (classes)
Label metric notations:

7.3 DATA SPLIT-UP
Data folder
- Training: Images, Labels
- Testing: Images, Labels
- Validation: Images, Labels

7.4 EVALUATION METRICS
- mAP (mean Average Precision)
- Ground truth measurements vs. inferences
- Bounding boxes
- Tracking
- Confidence score
- Classification (multiclass-model scenarios)
- Annotation of the objects (identify the object)
OBSTACLE DETECTION CALIBRATION
Confidence threshold - a confidence score from 0 to 1. Detections scoring at or above the threshold are kept and classified to that class; those below are discarded. (See the inference sketch after section 8.)

7.5 LOSS FUNCTIONS
- Cross-entropy loss
- Intersection over Union (IoU) loss

7.6 OPTIMIZATIONS
- Pruning and quantization
- Distilled model - pre-trained models
- Post-processing of YOLOv8l detections: apply techniques like non-max suppression (NMS) to refine the bounding boxes generated by YOLOv8l (helps reduce redundant detections).
- Detection with tracking - SORT algorithm

8. DOCUMENTATION REFERENCES
https://yolov8.org/
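To make the calibration and NMS post-processing in sections 7.4 and 7.6 concrete, here is a minimal inference sketch using the Ultralytics YOLOv8 Python API. The weights path, image path, and threshold values are illustrative assumptions, not values taken from this document.

```python
from ultralytics import YOLO

# Load a YOLOv8-large model; "yolov8l.pt" is the pretrained checkpoint.
# A custom fine-tuned weights file would normally be substituted here.
model = YOLO("yolov8l.pt")

# Run detection on a front-camera frame. conf is the confidence threshold
# described in section 7.4: detections scoring below it are discarded.
# iou controls the non-max suppression overlap used to drop redundant boxes.
results = model.predict(source="front_cam_frame.jpg", conf=0.5, iou=0.7)

for r in results:
    for box in r.boxes:
        cls_name = model.names[int(box.cls)]          # class label, e.g. "Car"
        score = float(box.conf)                       # confidence score in [0, 1]
        x1, y1, x2, y2 = box.xyxy[0].tolist()         # bounding box in pixels
        print(f"{cls_name}: {score:.2f} at ({x1:.0f}, {y1:.0f}, {x2:.0f}, {y2:.0f})")
```

The filtered boxes would then be handed to a tracker such as SORT for the tracking step mentioned above.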
Face Matching System
Use case Description:
Performs facial recognition by detecting faces, extracting features, and identifying individuals. FaceNet512, an image recognition model, is used together with a cascade classifier (for face detection) to perform the face matching.

Model Used: FaceNet512

Hardware Support:
- Raspberry Pi
- Pi Camera

Base Model GitHub Link:

GitLab Link to Repository and steps to Train: http://gitlab.embedur.local/cloud/tinyml/-/tree/face-matching

Backbone Research Paper:

Dataset Link:

Optimizations Done:
- Optimized Data Preprocessing: Mean subtraction and scaling align input data with the training data for better model performance.
- Data Augmentation for Robustness: Variations in brightness, rotation, and size help the model handle real-world image variations.
- Training Focused on Generalization: Early stopping prevents overfitting, improving performance on unseen data. A learning-rate scheduler allows faster convergence and potentially better generalization.
- Transfer Learning for Efficiency: Leverages pre-trained FaceNet512 as a powerful base.
- Fine-tuned Model Architecture: Freezing initial layers prevents overfitting by keeping pre-trained knowledge. Global average pooling captures global features for improved accuracy. A dense layer with class-specific neurons tailors the model to your specific classes (people).

Download Formats: TFLite

Input and Output Images:
Input Images:
Output Images:

JSON File Format:
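A minimal Keras sketch of the fine-tuning recipe described under "Optimizations Done" (frozen base, global average pooling, class-specific dense layer, early stopping, learning-rate scheduler). The facenet512_base.h5 path, the input size, and num_people are illustrative assumptions, as is the assumption that the backbone is truncated to emit a spatial feature map; this is not the exact training script used for the use case.

```python
import tensorflow as tf
from tensorflow.keras import layers, models, callbacks

num_people = 10      # number of identities in the custom dataset (illustrative)

# Placeholder for the pre-trained FaceNet512 backbone; in practice this would be
# loaded from a saved model or a face-recognition library (hypothetical file name).
base_model = tf.keras.models.load_model("facenet512_base.h5")
base_model.trainable = False        # freeze initial layers to keep pre-trained knowledge

inputs = tf.keras.Input(shape=(160, 160, 3))
x = base_model(inputs, training=False)
# Global average pooling collapses spatial features (assumes a 4-D feature map);
# the dense layer has one neuron per person, tailoring the model to the classes.
x = layers.GlobalAveragePooling2D()(x)
outputs = layers.Dense(num_people, activation="softmax")(x)
model = models.Model(inputs, outputs)

model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Early stopping and a learning-rate scheduler, as noted in the optimizations above.
cbs = [callbacks.EarlyStopping(patience=5, restore_best_weights=True),
       callbacks.ReduceLROnPlateau(factor=0.5, patience=2)]
# model.fit(train_ds, validation_data=val_ds, epochs=50, callbacks=cbs)
```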
Infrastructure Defect Detection System
Use case Description:
The Infrastructure Defect Detection System (IDDS) is a comprehensive solution designed to detect, monitor, and report defects or anomalies in various types of infrastructure. The Inception-v4 model is an advanced convolutional neural network architecture optimized for image classification tasks, featuring inception modules, factorization techniques, residual connections, and auxiliary classifiers for improved performance and efficiency.

Model Used: Inception V4

Hardware Support: Raspberry Pi

Base Model GitHub Link:

GitLab Link to Repository and steps to Train: http://gitlab.embedur.local/cloud/tinyml/-/tree/infrastructure-defect-detection

Backbone Research Paper:

Dataset Link: https://image-net.org/challenges/LSVRC/index.php

Optimizations Done:
- Optimized Data Preprocessing: Mean subtraction and scaling align input data with the training data for better model performance.
- Data Augmentation for Robustness: Variations in brightness, rotation, and size help the model handle real-world image variations.
- Training Focused on Generalization: Early stopping prevents overfitting, improving performance on unseen data. A learning-rate scheduler allows faster convergence and potentially better generalization.
- Transfer Learning for Efficiency: Leverages the pre-trained Inception V4 model as a powerful base.
- Fine-tuned Model Architecture: Freezing initial layers prevents overfitting and preserves pre-trained knowledge. Global average pooling captures comprehensive structural features for precise defect detection. Incorporating class-specific neurons tailors the model to identify distinct types of defects efficiently.

Download Formats: TFLite

Input and Output Images:
Input Images:
Output Images:

JSON File Format:
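Since the listed download format is TFLite, a conversion step is implied after training. Below is a minimal sketch using the TensorFlow Lite converter; the model file names are illustrative, and the quantization flag is an optional assumption rather than a step confirmed by this document.

```python
import tensorflow as tf

# Load the fine-tuned Inception V4 classifier saved after training (illustrative path).
model = tf.keras.models.load_model("idds_inception_v4.h5")

converter = tf.lite.TFLiteConverter.from_keras_model(model)
# Optional: default optimizations enable post-training quantization,
# which shrinks the model for deployment on a Raspberry Pi.
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

# Write the flatbuffer so it can be served as the downloadable TFLite artifact.
with open("idds_inception_v4.tflite", "wb") as f:
    f.write(tflite_model)
```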
Transfer Learning on YOLO
To train a YOLO model on a custom dataset via transfer learning:
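A minimal transfer-learning sketch using the Ultralytics YOLOv8 Python API, assuming a prepared dataset YAML that points at the train/val images and lists the class names. The file names, hyperparameters, and the choice to freeze the first layers are illustrative assumptions, not the exact procedure intended by this page.

```python
from ultralytics import YOLO

# Start from pretrained COCO weights so the backbone's learned features transfer
# to the custom classes; mainly the detection head needs to adapt.
model = YOLO("yolov8l.pt")

# data.yaml (hypothetical) points at the train/val image folders and class names.
results = model.train(
    data="data.yaml",
    epochs=100,
    imgsz=640,
    batch=16,
    freeze=10,     # optionally freeze the first N layers to keep pretrained features
)

# Evaluate on the validation split and export the trained weights for deployment.
metrics = model.val()
model.export(format="tflite")
```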
Guide to Hardening an Ubuntu Server - an attempt to prevent future breaches
1. Package Management
- Uninstall Unnecessary Packages: Remove any packages that are not required for the server's functionality.
- Vulnerability Scanning: Implement a vulnerability scanner to identify packages with known vulnerabilities. Update vulnerable packages individually to avoid unnecessary updates and potential downtime.

2. Secure SSH
- Use ED25519 Keys: Generate and use ED25519 keys instead of RSA keys for improved security.
- Disable Root Login: Set in .
- Disable Password Authentication: Set in .
- Fail2Ban: Install and configure fail2ban to prevent brute-force attacks.

3. Firewall Configuration
- External Firewall: Ensure that the firewall is properly configured outside of the servers, either through a dedicated firewall instance or by utilizing the security group settings in AWS.
- Docker Considerations: Be cautious when implementing local firewalls, as Docker uses iptables for its own traffic management. Coordinate with the team to avoid potential conflicts.

4. Disable Unnecessary Services and Kernel Modules
- Identify and Disable Services: Use to identify and disable unnecessary services.
- Disable Unnecessary Kernel Modules: Identify and disable unnecessary kernel modules.

5. User Management
- Consistent User Logins: Maintain consistent user logins across the servers to simplify management.
- Certificate Authority-based Authentication: Explore the implementation of Certificate Authority-based authentication using open-source tools. Discuss the feasibility and benefits with the team.

6. Enable SELinux or AppArmor
- SELinux: Enable and configure SELinux.
- AppArmor: Ensure AppArmor is enabled and enforce policies.

7. Logging and Monitoring
- Centralized Logging: Investigate centralized logging stacks such as the ELK stack (Elasticsearch, Logstash, Kibana) or other technologies that the team has experience with.
- Log Analysis: Utilize the centralized logging solution for log analysis and monitoring.

8. Implement Intrusion Detection and Prevention
- IDS/IPS: Deploy IDS/IPS systems like Snort or Suricata.

9. Secure Network Services
- HTTPS Configuration: Review and ensure proper HTTPS configuration for the specific HTTP server being used (e.g., Nginx, Apache).
- Service Communication: Audit the communication between services to identify any areas where security can be enhanced.

10. Implement File and Directory Permissions
- Least Privilege: Set appropriate file and directory permissions.

11. Data Encryption
- Granular Encryption: Continue encrypting sensitive data at a more granular level rather than relying on full disk encryption.

12. Perform Regular Security Audits
- Security Audits: Conduct regular security audits using tools like Lynis, OpenVAS, or Nessus.

13. Implement Kernel Hardening
- Kernel Security Features: Enable kernel security features like Address Space Layout Randomization (ASLR) and the Execute Disable (NX) bit.
- Hardened Kernel: Consider using a hardened kernel, such as the grsecurity kernel patch or the Linux Kernel Runtime Guard (LKRG).

14. Secure Protocols
- Protocol Audit: Conduct an audit of the protocols used for communication between services. Identify any insecure protocols and plan for their replacement.

15. Network Segmentation
- Traffic Segregation: Develop a plan to segregate traffic between servers based on their roles and security requirements.
- Access Control: Implement access control measures to restrict traffic between network segments.
16. Secure DNS Configuration
- Trusted DNS Servers: Configure your server to use trusted DNS servers, preferably over encrypted protocols like DNS over HTTPS (DoH) or DNS over TLS (DoT).
- DNSSEC Validation: Implement DNSSEC validation to ensure the integrity and authenticity of DNS responses.

17. Restrict Cron and At Jobs
- Limit Access: Limit access to the and utilities to authorized users only.
- Monitor Jobs: Regularly review and monitor cron jobs and at jobs for suspicious or unauthorized entries.

18. Monitor System Integrity
- Integrity Monitoring: Use tools like Tripwire or AIDE (Advanced Intrusion Detection Environment) to monitor the integrity of critical system files and detect unauthorized changes.
- Integrity Checks: Regularly run integrity checks and compare the results against a known-good baseline.

19. Educate and Train Users
- Security Awareness Training: Provide regular training to users on best security practices.

20. Conduct Penetration Testing
- Regular Penetration Testing: Perform regular penetration testing to identify vulnerabilities and weaknesses in your server's security posture.
- Security Assessments: Engage with experienced security professionals or use automated tools like Metasploit or Nmap for thorough assessments.

Regular Review and Updates
- Stay Informed: Regularly review and update hardening measures to keep up with emerging threats and best practices.
- Test changes in a non-production environment before applying them to production servers to avoid unintended disruptions.

Next Steps
- Prioritize the implementation of the hardening measures based on the team's feedback and the organization's security requirements.
- Regularly review and update the hardening guide to keep up with emerging threats and best practices.
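As a small companion to the SSH items in section 2, a script can flag risky sshd settings during audits. This is a minimal sketch and an assumption on my part: it presumes the standard OpenSSH config path /etc/ssh/sshd_config and that section 2's intent maps to the common PermitRootLogin and PasswordAuthentication directives; adapt it to the configuration actually in use.

```python
import re

SSHD_CONFIG = "/etc/ssh/sshd_config"   # standard OpenSSH path (assumption)

# Directives and the hardened values assumed from the SSH section above.
EXPECTED = {
    "PermitRootLogin": "no",
    "PasswordAuthentication": "no",
}

def audit_sshd(path=SSHD_CONFIG):
    findings = []
    with open(path) as f:
        lines = [l.strip() for l in f if l.strip() and not l.strip().startswith("#")]
    for directive, expected in EXPECTED.items():
        matches = [l for l in lines if re.match(rf"{directive}\b", l, re.IGNORECASE)]
        if not matches:
            findings.append(f"{directive} not set explicitly (default may be permissive)")
        elif not any(expected in m.lower().split()[1:] for m in matches):
            findings.append(f"{directive} is not set to '{expected}': {matches}")
    return findings

if __name__ == "__main__":
    for issue in audit_sshd():
        print("WARNING:", issue)
```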
Models Progress
Public-Site General Models Progress
Week (Dates)     Planned   Completed   In Progress
1 (7th June)     20        20          0
2 (14th June)    20        20          0

Public-Site Usecase Models Progress
Week (Dates)     Planned   Completed   In Progress
1 (7th June)     2         2           0
2 (14th June)    3         2           1

Synaptics General Models Progress
Week (Dates)     Planned   Completed   In Progress
1 (7th June)     20        20          0
2 (14th June)    20        20          0

Synaptics Usecase Models Progress
Week (Dates)     Planned   Completed   In Progress
1 (7th June)     2         2           0
2 (14th June)    3         0           3

Public General Models Completed:
ResNet 101, MobileNet V3 Small, ShuffleNet, SqueezeNet, AlexNet, ResNet 50, Mediapipe - Pose, Mediapipe - Segmentation, Inception V3, MNasNet, YoloV8 - Seg, YoloV8 - Detection, YoloV8 - Pose, YoloV5 - Detection, FastSAM, YoloV8 - Face, WakeWord Detection, YolACT, MobileNet V3 Large, DenseNet-121, EfficientNet B3, RegNetX 800mf, FCN ResNet 101, EfficientNet B0, GoogleNet, ResNext 50, ResNext 101, RegNetX 400mf, MobileNet V2, DeeplabV3 MobileNet V3, EfficientNet V2 Small, EfficientNet B7, MNasNet 1_3, SimpleNet V1_5m_m2, Unet, Wide ResNet50, FCN ResNet50, DenseNet201, Proxyless_cpu, RegNet_y_2_3gf

Public General Models Planned (Next update):
SimpleNet V1_9m_m2, Proxyless_mobile, Proxyless_gpu

Public Usecase Models Completed:
Driver Distraction System, Face Matching System, Infrastructure Defect Detection System, YolACT on 480 x 480

Public Usecase Models Planned:
Obstacle Detection

Synaptics General Models Completed (with NPU metrics):
ResNet 101, MobileNet V3 Small, ShuffleNet, SqueezeNet, AlexNet, ResNet 50, Mediapipe - Pose, Mediapipe - Segmentation, Inception V3, MNasNet, YoloV8 - Seg, YoloV8 - Detection, YoloV8 - Pose, YoloV5 - Detection, FastSAM, YoloV8 - Face, WakeWord Detection, YolACT, MobileNet V3 Large, DenseNet-121, EfficientNet B3, RegNetX 800mf, FCN ResNet 101, EfficientNet B0, GoogleNet, ResNext 50, ResNext 101, RegNetX 400mf, MobileNet V2, DeeplabV3 MobileNet V3, EfficientNet V2 Small, EfficientNet B7, MNasNet 1_3, SimpleNet V1_5m_m2, Unet, Wide ResNet50, FCN ResNet50, DenseNet201, Proxyless_cpu, RegNet_y_2_3gf
Weekly Status Report - 13th June
Executive Summary:
Model Zoo:
- Worked on completing the login components with token authentication for the Synaptics model zoo.
- Worked on adding the model details page with the model download and the instructions to use the model.
- Working on bringing up the About Us and Contact Us pages for the site.
- Collected metrics for the 40 general models and 3 usecase models for the public site; working on adding the additional usecase models.
- Working on collecting the next set of models for the Synaptics model zoo and the usecase models.

Accomplishments:
Dev:
- AI-390: Curation of model details for each of the selected models - 4 models – ==Closed.==
- AI-427: Curation of model details for each of the selected models - 5 models – ==Closed.==
- AI-428: Collection of model metrics for the set of selected 5 models including RPI – ==Closed.==
- AI-485: Clean up DB to remove unwanted details – ==Closed.==
- AI-462: Set full names while displaying platform and download formats – ==Closed.==
- AI-430: Collection of model metrics for Synaptics next 5 models – ==Closed.==
- AI-452: Benchmark metrics for SL1680 – ==Closed.==
- AI-432: Metrics collection of Face Matching use case for SL1680 – ==Closed.==
- AI-407: Metrics collection of Driver Distraction use case for SL1680 – ==Closed.==
- AI-355: UI for user login page with unit testing – ==Closed.==
- AI-360: Backend APIs for user login unit testing – ==Closed.==
- AI-437: Collection of model metrics for the set of selected next 5 models including RPI – ==Closed.==
- AI-365: APIs for model details, metrics and download options – ==Closed.==
- AI-363: UI for model details, metrics and download options – ==Closed.==
- AI-433: Curation of model details for each of the selected models - next 5 models – ==Closed.==
- AI-436: Defect Detection System training – ==Closed.==
- AI-438: Metrics collection of use case - RPI – ==Closed.==
- AI-490: Collection of metrics for the Synaptics 5 models – ==In Progress.==
- AI-442: Training of the Obstacle Detection usecase – ==In Progress.==
- UI changes for authentication – ==In Progress.==
- AI-368: UI for Other page – ==In Progress.==
- AI-369: APIs for Other pages – ==In Progress.==

DevOps:
- AI-381: Deployment pipeline setup of public model zoo site - Prod – ==Closed.==
- AI-382: Planning of deployment of model zoos for Synaptics - Prod – ==Closed.==
- AI-459: Postgres DB backup for model zoo servers – ==Closed.==

Issues for Escalations/Blockers:
NIL

Plan for next week:
Model Zoo:
- Work on enhancements to the model details page.
- Bug fixes on the UI and UX front for the Synaptics and public sites.
- Set up responsiveness for mobile sites.
- Release the protected, login-enabled beta site.
- Set up the instances and deployment plan for the public and Synaptics sites.

PTO and Vacation: NIL

DETAILED SUMMARY:

JIRA TICKET(S) COMPLETED:

[JENANI SAMPATH] AI-381: Deployment pipeline setup of public model zoo site - Prod – ==Closed.==
- Completed the addition of beta and prod stages in the pipeline for both frontend and backend. Whenever code is merged to the respective branch, the pipeline is triggered for that environment.
- Added documentation for the same.

[JENANI SAMPATH] AI-382: Planning of deployment of model zoos for Synaptics - Prod – ==Closed.==
- Completed the pipeline for both frontend and backend for Synaptics. Whenever code is merged to the respective branch, the pipeline is triggered for that environment.
- Added documentation for the same.
[JENANI SAMPATH] AI-390: Curation of model details for each of the selected models - 4 models – ==Closed.==
- Collected model metrics for 4 models:
  1. Yolov8face
  2. wake_word tflite
  3. yolact
  4. Mobilenetv3 large

[JENANI SAMPATH] AI-427: Curation of model details for each of the selected models - 5 models – ==Closed.==
- Collected model metrics for 5 models:
  1. Resnext 101
  2. Regnet400mf
  3. Mobilenet v2
  4. GoogleNet
  5. resnext50

[JENANI SAMPATH] AI-459: Postgres DB backup for model zoo servers – ==Closed.==
- Attached an S3-access IAM role to the EC2 instances.
- Added the DB backup script to a cron job that is triggered every day at 10 AM IST.
- The Postgres backup dump is pushed to an S3 bucket.

[SURAJRAM SRINIVASAN] AI-428: Collection of model metrics for the set of selected 5 models including RPI – ==Closed.==
- Collected the model metrics for the next set of 5 models and pushed them to the DB.
- Completed loading the 40 general models into the DB for the public site.

[SURAJRAM SRINIVASAN] AI-485: Clean up DB to remove unwanted details – ==Closed.==
- Removed the unwanted categories, domains and other unrelated values from the DB.
- Cleaned up the DB and fixed the missing values.

[SURAJRAM SRINIVASAN] AI-462: Set full names while displaying platform and download formats – ==Closed.==
- Modified the DB to set the full names instead of the alias names for the platforms and download formats.
- Verified the same and avoided short forms for the fields in the DB.

[SURAJRAM SRINIVASAN] AI-430: Collection of model metrics for Synaptics next 5 models – ==Closed.==
- Set up the pipeline for the NPU metrics on the SL1680.
- Collected the metrics using the SL1680 for the next set of 5 models and pushed the data to the DB.

[ANEES ASLAM] AI-452: Benchmark metrics for SL1680 – ==Closed.==
- Updated the benchmark_model tool for inference on the NPU with SDK v1.0.0 (delegates).
- Integrated it with the SL1680 pipeline to perform layer-wise metrics collection.

[ANEES ASLAM] AI-432: Metrics collection of Face Matching use case for SL1680 – ==Closed.==
- Completed uploading model metrics of the Resnet 512 based facial feature matching on SL1680.

[ANEES ASLAM] AI-407: Metrics collection of Driver Distraction use case for SL1680 – ==Closed.==
- Completed uploading model metrics of the InceptionV3-based driver distraction detection on SL1680.

[KEERTHANA SUNDARBABU] AI-355: UI for user login page with unit testing – ==Closed.==
- The login UI has been integrated with the backend.
- All the APIs are fixed with authentication headers.
- Login expiration, session handling and logout functionalities are completed.

[JAYAPRAKASH MASILAMANI] AI-360: Backend APIs for user login unit testing – ==Closed.==
- Integrated the login and logout APIs with the frontend NextJS application.
- Implemented the unit tests for the login and logout APIs using Jest.
- Deployed the login and logout APIs in the Synaptics REST backend.

[JAYAPRAKASH MASILAMANI] AI-437: Collection of model metrics for the set of selected next 5 models including RPI – ==Closed.==
- Collected the details of the below models and ran the model metrics pipeline:
  1. FCN Resnet 101
  2. EfficientNet B0
  3. DeeplabV3 MobileNetV3
  4. EfficientNetV2 Small
  5. EfficientNet B7

[SWATHI RAJAMANI] AI-365: APIs for model details, metrics and download options – ==Closed.==
- Added API routes for model details and metrics.
- Made schema changes.
- Added functionality support for the usecase and details APIs.

[SWATHI RAJAMANI] AI-363: UI for model details, metrics and download options – ==Closed.==
- Updated the model details page UI based on the latest design.
- Integrated the details API.

[SHAIK SHUAIB AFNAN] AI-433: Curation of model details for each of the selected models - next 5 models – ==Closed.==
- Completed collection of metrics for the model zoo for the below models and pushed them successfully:
  - Unet
  - Wide resnet50
  - FCN resnet50
  - Densenet201
  - Proxyless_cpu
  - regnet_y_2_3

[SHAIK SHUAIB AFNAN] AI-436: Defect Detection System Training – ==Closed.==
- Built an infrastructure defect detection system, an image classification model based on the Inception V4 model.
- Built the Inception V4 model from scratch, as there is no API implementation of the model in TensorFlow or PyTorch.
- Integrated DVC and MLflow for storing the dataset in an S3 bucket and logged parameters and artifacts to MLflow.
- Wrote a README that covers the use case, model, data collection, training and prediction.

[SHAIK SHUAIB AFNAN] AI-438: Metrics collection of use case - RPI – ==Closed.==
- Completed collecting the metrics and pushed the Inception V4 model for the infrastructure defect detection use case to the database.

JIRA TICKET(S) IN PROGRESS:

[JENANI SAMPATH] AI-490: Collection of metrics for the Synaptics 5 models – ==In Progress.==
- Collecting metrics on the SL1680 for the below models:
  - Regnet400mf
  - Mobilenet v2
  - GoogleNet
  - resnext50
  - wakeword
[ETA - 14/06]

[SURAJRAM SRINIVASAN] AI-442: Training of the Obstacle Detection usecase – ==In Progress.==
- Started training the obstacle detection use case using the KITTI dataset.
- Working on the optimizations for the training and generating the metrics for the use case.
[ETA - 14/06]

[JAYAPRAKASH MASILAMANI] UI Changes for Authentication – ==In Progress.==
- Created an AppContext wrapper for protected routing on every page.
- Implemented protected routing for the base pages in the Synaptics UI.
[ETA - 17/06]

[SANJAY RAVI] AI-368: UI for Other page – ==In Progress.==
- Implementing UI components for the Other page and the Contact Us page with proper transitions.
[ETA - 14/06]

[SANJAY RAVI] AI-369: APIs for Other pages – ==In Progress.==
- Updating the APIs for the model zoo Other page and Contact page.
[ETA - 14/06]
Direct API
The new direct API allows commands to be sent to a device through the MQTT topic "device//data_req". The agent can access the available data in memory and respond on the topic "device//data_resp". Messages received on the "data_req" topic can be of type GET or SET and carry a correlation ID; the corresponding "data_resp" messages contain the device's runtime data or response, tagged with the same correlation ID. The agent can therefore be adjusted to keep the most recent runtime data in memory. The idea is to modify the agent so that it maintains a latest runtime-data snapshot (polled periodically and stored as JSON in memory) that can be queried, and a response can be published on the response topic directly from that snapshot.

The agent direct API is supported only by agents upgraded with this feature. For legacy agents, we respond with "Not Supported API". This is determined by a feature flag received from the device; the direct API should be validated at the REST service based on this flag before initiating the request to the agent.

Schema: Below are the schema changes to be added to commonfeatures in the accesspoint runtime schema.

Sample request/response for GET:

Sample request/response for POST:

| Status Code | Message |
|---|---|
| 200 | Channel changed successfully |
| 206 | Channel changed for either radio_24 or radio_5 |
| 204 | No action performed |
| 500 | Error occurred while processing the request |

Cloud Workflow:

The new Go service, called the Direct Req service, will use EMQX to send an MQTT request to the device. In response, the device will provide runtime data through an MQTT response topic. The Bridge will listen on the EMQX MQTT response topic and, upon receiving a message, handle it like other runtime data and publish it to the devices.opt_direct data topic in Kafka.

The new Go service, called the Direct Resp service, will subscribe to the devices.opt_direct data topic in Kafka. It will process the message in a similar way to the JSON inserter for runtime data and save it in Cassandra. Additionally, the service will store the same data in the Redis cache with a configurable time-to-live (TTL).

For each API call made to retrieve runtime attribute data, the direct API service will first look up the device and the requested attribute values in Redis. If the value is found in Redis, the service responds immediately. If the value is not found, the service searches Cassandra and initiates a direct request (MQTT request) to the device to obtain updated data, then waits for a specified time period. If the request is not processed within this interval, the service returns a timeout. The request and response are associated with a unique session ID for correlation. The latest version of the GET API includes a flag that determines whether it should return the most recent data from Cassandra or request fresh data directly from the device. A hypothetical request/response exchange over MQTT is sketched below.

High-level architecture.
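The exact request and response payloads are not included above. The following is a minimal, hypothetical sketch of the GET exchange using the paho-mqtt Python client; the device ID placeholder, broker address and payload field names (`type`, `correlation_id`, `attributes`, `data`) are assumptions for illustration, not the confirmed schema.

```python
# Hypothetical sketch of the Direct API GET exchange over MQTT.
# Topic layout and payload field names are assumptions for illustration only.
import json
import uuid

import paho.mqtt.client as mqtt

DEVICE_ID = "<device-id>"                      # placeholder
REQ_TOPIC = f"device/{DEVICE_ID}/data_req"     # request topic described above
RESP_TOPIC = f"device/{DEVICE_ID}/data_resp"   # response topic described above


def on_message(client, userdata, msg):
    # The agent is expected to echo the correlation ID so the caller
    # can match this response to its pending request.
    resp = json.loads(msg.payload)
    print("correlation_id:", resp.get("correlation_id"))
    print("runtime data:", resp.get("data"))


client = mqtt.Client()
client.on_message = on_message
client.connect("emqx.example.local", 1883)     # hypothetical broker address
client.subscribe(RESP_TOPIC)

request = {
    "type": "GET",                             # GET or SET, as described above
    "correlation_id": str(uuid.uuid4()),       # unique ID for correlation
    "attributes": ["radio_24", "radio_5"],     # hypothetical attribute names
}
client.publish(REQ_TOPIC, json.dumps(request))
client.loop_forever()
```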
New topics: runtime.directv3 and devices.directRuntime

Here's a brief description of the components:
- New Go Service (Direct Req Service):
- EMQX MQTT Broker:
- Bridge:
- Kafka:
- New Go Service (Direct Resp Service):
- API Gateway (APISix):
- Clients (API Consumers):

==Need to include more details about the data formats, error handling, and scalability considerations.==

Sample Schema:
- accesspoint.json
- radio_5.json
- radio_24.json
- wlan.json
- wifi_client.json
- eth_client.json

Sample requests and responses: Accesspoint, Client, Radio 2.4, Radio 5

Data Model

Tables used in Cassandra

Direct look-up table (config.direct_lookup): lookup table for the direct v1 session ID using type, id and timestamp.

| Column | Type |
|---|---|
| type (partition key) | text |
| id (partition key) | text |
| timestamp (clustering key) | int |
| session_id | text |
| status | text |

Redis
Key: -type
Value: Map

Kafka/EMQTT Topic Reference Diagram
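Tying the cloud workflow and the data model above together, the sketch below illustrates the Redis-first lookup with a fallback to Cassandra and a direct MQTT request. It is a simplified, hypothetical outline: the key format, timeout values and the helper functions (`query_cassandra`, `send_direct_request`) are assumptions, not the actual service code.

```python
# Hypothetical outline of the Direct API read path described above:
# 1) look in Redis, 2) fall back to Cassandra, 3) issue a direct MQTT
# request to the device and wait up to a timeout for fresh data.
import time
import uuid

import redis

REDIS_TTL_SECONDS = 300          # configurable TTL, value assumed
DIRECT_TIMEOUT_SECONDS = 10      # wait period for the device, value assumed

cache = redis.Redis(host="localhost", port=6379)


def get_runtime_attributes(device_id: str, attr_type: str):
    key = f"{device_id}-{attr_type}"            # "Key: -type" from the data model
    cached = cache.hgetall(key)                 # "Value: Map"
    if cached:
        return cached

    # Not in Redis: read the last stored snapshot from Cassandra (stub).
    stored = query_cassandra(device_id, attr_type)

    # Ask the device for fresh data through the Direct Req service (stub),
    # correlating request and response with a session ID.
    session_id = str(uuid.uuid4())
    send_direct_request(device_id, attr_type, session_id)

    deadline = time.time() + DIRECT_TIMEOUT_SECONDS
    while time.time() < deadline:
        fresh = cache.hgetall(key)              # Direct Resp service refreshes Redis
        if fresh:
            return fresh
        time.sleep(0.5)

    # Timed out: return whatever Cassandra had, or report a timeout.
    return stored if stored else "timeout"


def query_cassandra(device_id: str, attr_type: str):
    ...  # placeholder: read the latest runtime row for (device_id, attr_type)


def send_direct_request(device_id: str, attr_type: str, session_id: str) -> None:
    ...  # placeholder: publish a GET on the device's data_req topic with the session ID
```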
Security Tasks
- [ ] Penetration Testing
  - [x] Investigate AWS policies for penetration testing
  - [x] Develop pen test plan for Astral instance
  - [ ] Review the test plan internally
  - [ ] Get permission from AWS to proceed with the test plan
  - [ ] Execute the pen test plan
- [ ] Software Development Lifecycle
  - [x] Get access to GitLab (http://gitlab.embedur.local) and create a login
  - [x] Start looking into NodeJS/JavaScript vulnerability scanning
  - [ ] Set up automated scanning for the Astral REST server
  - [ ] Set up automated scanning for the Astral WEB server
  - [x] Look into Golang vulnerability scanning
  - [ ] Set up automated scanning for Golang projects
Status - Week ending on 07/06
*Summary:

Internal-Engineering - Gitlab/Jira/AWS/Wiki:
The EMS staging server at ems-staging.int.embedur.com was not operational. After flushing the DNS cache and configuring the resolvers to use Google's nameservers, the issue was resolved.

Internal Environments - QA/Dev:
A new LLM model has been developed for faster responses, incorporating a caching mechanism to enhance speed. A new worker node has been added to the Airtel-QA3 server. The model's performance has been benchmarked, and the service is now ready for testing.

Singtel Production:
Created an API spec document that contains the list of basic API calls in the VMware Singtel staging environment. Also shared the Postman collection JSON with the customer to proceed with penetration testing. Jeanette confirmed that NAT IP message consumption is fine, but they are still not able to consume the runtime streaming topics. Need to have a joint troubleshooting call with the customer for debugging.

Airtel Production:
Currently the ORC processor fills data for the widgets on the Summary page and some widgets on the Reports page. Since the HDFS path in production is complex, created intermediary files which will be the base for the other Spark jobs. We currently provide a daily self-healing report that lists all devices that crossed the configured rule-engine threshold for CPU / memory / temperature / coverage hole / interference; the requirement from Airtel is to compare the number of above-threshold occurrences for a device with the total number of samples in the cloud. Airtel reported that for some devices the reboot counts on TR and Embedur do not match, so for some devices we have collected the data at regular intervals on both sides for further investigation.

*Tickets information:

Closed Tickets:

| Ticket | Description |
|---|---|
| DEVOPS-1163 | EMS Caddy malfunction |
| DEVOPS-1164 | Setup Raspberry Pi 4 |
| AIR-3708 | LLM based resolution |
| AIR-3709 | JSON query processing |
| AIR-3715 | Day wise access point / radio details with required fields |
| AIR-3716 | Change in self-healing report by comparing total number of samples |

In-Progress Tickets:

| Ticket | Description | ETA |
|---|---|---|
| AIR-3647 | Create a design document and update WIKI | 10/06/2024 |
| AIR-3707 | Prompt Engine for Recommendation Service | 14/06/2024 |
| SMN-739 | Kafka message consumption from VMware Prod | 14/06/2024 |
| SMN-742 | Multi domain cert for Singtel Staging (SAN certs) | 14/06/2024 |
| SMN-744 | Penetration testing for VMware Production setup | 14/06/2024 |
| AIR-3717 | Reboot count analysis | 14/06/2024 |

PTO & Vacation:

Plan for next week:

Data Analytics
The to-do list includes creating a Spark job for hourly statistics on Radio 2.4 and 5 data: retrieving data from HDFS, converting it to CSV and ORC formats, loading it into PySpark, transforming it into a DataFrame, and updating it in Cassandra.

Penetration testing and SFTP server connection establishment
Penetration testing has to be done from the customer's end, as we have shared the API spec document and the Postman JSON collection. Since DNAT, firewall and routing rules have been added, the SFTP server connection has to be established. Need to have a troubleshooting call with the customer to resolve this issue.

Reboot analysis
As we got the reboot dump from the TR side as well, the reboot counts for those devices will be compared with the Embedur dump.

*Detailed Summary:

Recommendation Service
A new LLM model has been developed for faster responses, incorporating a caching mechanism to enhance speed. A new worker node has been added to the Airtel-QA3 server. New prompts and a problem classification template have been developed. The model's performance has been benchmarked, and the service is now ready for testing.

Caddy Malfunction
The EMS staging server at ems-staging.int.embedur.com was not operational. After flushing the DNS cache and configuring the resolvers to use Google's nameservers, the issue was resolved. The problem appears to have originated from the nameserver configuration; by removing and re-adding the nameserver records, we were able to restore proper functionality. The server is now accessible and working correctly.

Raspberry Pi 4
The new Raspberry Pi 4 setup is complete with Ubuntu 22.04, including all essential packages and tools for our projects. SSH is configured, ensuring secure remote access. Python 3 and the necessary libraries are installed, streamlining our development process. The system is now ready for immediate use and project deployment.

Singtel Penetration Testing and Kafka Message Consumption
Created an API spec document that contains the list of basic API calls in the VMware Singtel staging environment. API calls related to accesspoint, wlan, client, radio, group profile and the push notification feature were added as part of the API spec. Also shared the Postman collection JSON file with the customer to proceed with penetration testing. The customer confirmed that NAT IP message consumption is fine, but we still need a debugging call with them, as the runtime streaming topics aren't consumed consistently.

Spark - Intermediary File
All the necessary fields for the types radio_5, radio_24, client and accesspoint have been included and the data for those values has been generated. A scheduler has been set to run every hour to generate the intermediary files, so for every hour a single ORC file is created for each type (accesspoint, radio_5, etc.).

Self-Healing Summary
The entire logic has been written in PySpark using the intermediate hourly file, and the output contains the devices whose occurrence percentage is greater than or equal to 50% (a sketch of this occurrence calculation appears at the end of this report). Tested the report generation for Airtel using the intermediary files and it is working as expected. The process has also been automated, so the problematic-devices table will be sent to Airtel as part of the daily self-healing summary mail.

Reboot analysis
Using Spark, fetched the reboot count data from the ORC files for the conflicting devices provided by Airtel. We have the data for 05/06/2024 and the TR data has also been provided by Airtel. Need to verify the reboot counts for all the time intervals between these two files.
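To illustrate the self-healing occurrence calculation described above, here is a minimal PySpark sketch. The column names (`mac`, `cpu_usage`), the threshold value and the input path are assumptions for illustration; the production job reads the hourly intermediary ORC files and applies the configured rule-engine thresholds.

```python
# Minimal sketch: flag devices whose share of above-threshold samples is >= 50%.
# Column names, threshold and path are illustrative assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("self-healing-summary").getOrCreate()

CPU_THRESHOLD = 80.0   # hypothetical rule-engine threshold

df = spark.read.orc("/data/intermediate/accesspoint/")   # hourly intermediary files

summary = (
    df.groupBy("mac")
      .agg(
          F.count("*").alias("total_samples"),
          F.sum(F.when(F.col("cpu_usage") > CPU_THRESHOLD, 1).otherwise(0))
           .alias("above_threshold"),
      )
      .withColumn(
          "occurrence_pct",
          F.col("above_threshold") / F.col("total_samples") * 100,
      )
      .filter(F.col("occurrence_pct") >= 50)
)

summary.show(truncate=False)
```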
Pipeline for Deployment
Architecture

The deployment architecture:
1. Source code is pushed to the dev branch and the application is deployed to the dev server. The development teams verify their changes on the dev server. Once all the changes look fine, the developers push the code from the dev branch to the QA branch.
2. The application then gets deployed to the QA server, where the QA team performs quality checks and tests. Once it is approved by the QA team, the maintainers push the code from the QA branch to beta.
3. The application then gets deployed to the beta server, which always has a working version of the app. From the beta branch the code is pushed to prod.
4. In this step the merge request has to be approved by a maintainer, and once approved the changes are deployed to the production server.

Frontend Pipeline

The model zoo app frontend deployment pipeline consists of two stages:
1. Build_model_zoo_app: builds a Docker image from the code changes made.
2. Deploy_model_zoo_app: deploys the Docker Compose stack for the newly built image.

Gitlab-ci
1. The deploy stage must be modified and added for every environment, such as dev, QA, release beta and production.
2. The refs indicate when the stage should be triggered, i.e. a merge to QA deploys to QA, and so on.
3. The host details and credentials are set as GitLab variables; only users with authorization can view them.
4. The QA and dev branches are protected, meaning only authorized maintainers can merge to those branches in order to trigger a deployment.

Note: The same process is followed for Synaptics.

Backend Pipeline

The backend pipeline consists of two stages:
1. Build_model_zoo_backend: builds a Docker image from the code changes made.
2. Deploy_model_zoo_backend: deploys the Docker Compose stack for the newly built image.

Gitlab-ci
1. The deploy stage must be modified and added for every environment, such as dev, QA, release beta and production.
2. The refs indicate when the stage should be triggered, i.e. a merge to QA deploys to QA, and so on.
3. The host details and credentials are set as GitLab variables; only users with authorization can view them.
4. The QA and dev branches are protected, meaning only authorized maintainers can merge to those branches in order to trigger a deployment.

Note: The same process is followed for Synaptics. A hedged sketch of the .gitlab-ci.yml structure for these stages follows.
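The actual .gitlab-ci.yml is not reproduced here; the following is a minimal, hypothetical sketch of the frontend build/deploy stages under the conventions described above. The job names mirror the stage names mentioned, but the script commands, branch refs and variable names ($DEPLOY_HOST, $DEPLOY_USER) are assumptions, not the real CI file.

```yaml
# Hypothetical sketch of the two-stage pipeline described above.
# Commands, refs and variables are illustrative, not the actual CI file.
stages:
  - build
  - deploy

build_model_zoo_app:
  stage: build
  script:
    - docker build -t model-zoo-app:$CI_COMMIT_SHORT_SHA .

deploy_model_zoo_app_qa:
  stage: deploy
  only:
    refs:
      - qa                      # triggered only when code is merged to the QA branch
  script:
    # Host details and credentials come from protected GitLab CI/CD variables.
    - ssh $DEPLOY_USER@$DEPLOY_HOST "cd /opt/model-zoo && docker compose up -d"
```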
Device Inventory
Last audited on June 12, 2024

Devices for AI/ML

| S.No | Device | Owner | Current User | Location | Comments |
|---|---|---|---|---|---|
| 1 | Raspberry PI 3B+ (EURRPI-22) | Keerthana/Bala | Keerthana | Workstation 122 | Working |
| 2 | Linux Thinkpad Laptop (EURLTP-448) | Keerthana/Bala | | Workstation 122 | Working |
| 3 | EURLTP-157 (Windows 10) | Jenani/Bala | Jenani | Lab B | Working |
| 4 | EURLTP-153 (Windows 10) | Jenani/Bala | Jenani | Lab B | Working |
| 5 | EURLTP-113 (Windows 10) | Jenani/Bala | Jenani | Lab B | Working |
| 6 | Raspberry PI 4B (EURRPI-31) | Suraj/Bala | Suraj | Lab B | Working |
| 7 | PI camera | Suraj/Bala | Suraj | Lab B | Working |
| 8 | 3 x PiBOX Micro HDMI cable (EURCABLE-19, EURCABLE-20, EURCABLE-21) | Bala | | Workstation 106 | Working |
| 9 | Raspberry Pi 5 (EURRPI-35) → Single Board Computer, 8 GB RAM, 2.4GHz 64-bit Quad-core Arm Cortex-A76 Processor, Bluetooth 5.0, BLE Wireless. Includes: Raspberry Pi 5 board, AC power adapter (USB-C) | Bala | | Workstation 106 | Working |
| 10 | Raspberry Pi 5 (EURRPI-36) → Single Board Computer, 8 GB RAM, 2.4GHz 64-bit Quad-core Arm Cortex-A76 Processor, Bluetooth 5.0, BLE Wireless. Includes: Raspberry Pi 5 board, AC power adapter (USB-C) | Bala | | Workstation 106 | Working |
| 11 | Raspberry Pi 5 (EURRPI-37) → Single Board Computer, 8 GB RAM, 2.4GHz 64-bit Quad-core Arm Cortex-A76 Processor, Bluetooth 5.0, BLE Wireless. Includes: Raspberry Pi 5 board, AC power adapter (USB-C) | Bala | | Workstation 106 | Working |
| 12 | 3 x Raspberry PI 5MP camera board module (EURBOARD-15, EURBOARD-16, EURBOARD-17) | Bala | | Workstation 106 | Working |
| 13 | Google Coral USB Edge TPU ML Accelerator coprocessor (EURPRO-01). Includes: Google Coral USB Edge TPU ML Accelerator coprocessor, Type-C USB cable | Bala | | Workstation 106 | Working |
| 14 | 2 x Robocraze USB 2.0 Mini Microphone for Raspberry Pi 3, 2 Module B & Rpi 1 Model B+/B Laptop Desktop PCs Skype VoIP Voice Recognition Software (EURRCR-01, EURRCR-02) | Bala | | Workstation 106 | Working |

Devices for TinyML

| S.No | Device | Owner | Current User | Location | Comments |
|---|---|---|---|---|---|
| 1 | Arduino Tiny Machine Learning Kit: Arduino Nano 33 BLE Sense board; 9-axis inertial sensor (what makes this board ideal for wearable devices); humidity and temperature sensor (for highly accurate measurements of environmental conditions); barometric sensor (you could make a simple weather station); microphone (to capture and analyse sound in real time); gesture, proximity, light color and light intensity sensor (estimate the room's luminosity, and also whether someone is moving close to the board); 1 OV7675 camera; 1 Arduino Tiny Machine Learning Shield; 1 USB A to Micro USB cable | Bala | Bharath | Workstation 110 | Working device |
| 2 | ESP32-CAM Wi-Fi Bluetooth Development Board DC 5V Dual-core Wireless with OV2640 Camera TF Card Module | Bala | Bharath | Workstation 110 | Working device |
| 3 | Breadboard 840 point with jumper wires set (40+40+40) | Bala | | Workstation 91 | Working device |
| 4 | OV7670 camera image sensor | Bala | Bharath | Workstation 110 | Working device |

Devices for SYNAPTICS AIML

| S.No | Device | Owner | Current User | Location | Comments |
|---|---|---|---|---|---|
| 1 | SR110 A0 (Label: 35). Includes: UART bridge, JTAG connector, USB C type cable, USB B type cable | Bala | | Workstation 91 | Working device |
| 2 | SR110 A0 (Label: 4). Includes: UART bridge, JTAG connector, USB C type cable, USB B type cable | Bala | | Workstation 91 | Working device |
| 3 | SR110 A0 (Label: 3). Includes: UART bridge, JTAG connector, USB C type cable, USB B type cable | Bala | | Workstation 91 | Need to flash image using JTAG |
| 4 | SR110 A0 (Label: 2). Includes: UART bridge, JTAG connector, USB B type cable | Bala | | Workstation 91 | Currently in not working state |
| 5 | VS680_EVK (S/N: 2021030504). Includes: Power adapter, Mini USB B cable, SPI flash, Camera interface board, JTAG interface board | Bala | | Workstation 106 | All the devices mentioned are placed inside a single box |
| 6 | J-Link (S/N: 602007378) | Bala | | Workstation 91 | Working device |
| 7 | Wipro Garnet B22 Smart LED Bulb | Bala | | Workstation 91 | Working device |
| 8 | 7 port USB hub | Bala | | Workstation 91 | Working device |
| 9 | Lenovo laptop (EURLTP-275) → Windows 11 Pro | Bala | Jeyanth | Workstation 106 | Working device and has USB access |
| 10 | Lenovo laptop (EURLTP-256) → Ubuntu 20.04.6 | Bala | | Lab B | Working device |
| 11 | NUCLEO-144 STM32L4R5ZI Evaluation Board (P/N: 497-17668-ND) → STM32L4R5 Nucleo-144 STM32L4 ARM® Cortex®-M4 MCU 32-Bit Embedded Evaluation Board | Bala | | Workstation 91 | Working device |
| 12 | STM32 NUCLEO EXPANSION BOARD POW (P/N: 497-17665-ND) | Bala | | Workstation 91 | Working device |
| 13 | Arduino UNO R3 board with DIP ATmega328P | Bala | | Workstation 91 | Working device |
| 14 | I2C Bi-Directional Logic Level Converter - 4 Channel | Bala | | Workstation 91 | Working device |
| 15 | 7 port USB hub | Bala | Jeyanth | Workstation 91 | Working device |
| 16 | Logitech BCC950 conference cam (EURVCAM-1) | Bala | Bala | Workstation 106 | Working device |
| 17 | 7 port USB hub | Bala | Bala | Workstation 106 | Working device |
| 18 | Samsung monitor (EURMTR006) | Bala | Bala | Workstation 106 | Working device |
| 19 | Wired keyboard (11AZFITLET) and mouse | Bala | Bala | Workstation 106 | Working device |
| 20 | SL1680 EVK (S/N: K588000055). Includes: USB C cable (C plugs on both ends), AC power adapter, AC plug adapter, USB-A to UART bridge | Bala | Bala | Workstation 106 | Working device |
| 21 | SL1680 EVK (S/N: K588000042). Includes: USB C cable (C plugs on both ends), AC power adapter, AC plug adapter, USB-A to UART bridge | Bala | Jeyanth | Workstation 91 | Working device |
| 22 | Ethernet cable | Bala | Bala | Workstation 106 | Working device |
| 23 | 8 x C type USB cables (EURCABLE-11 through EURCABLE-18) | Bala | Jeyanth, Bala | Workstation 91 | Working device |
| 24 | Sabre Evaluation Board (S/N: AOI6059561). Includes: C type USB cable, B type USB cable | Bala | | Workstation 106 | Working device |
| 25 | STM32N6570-DK (EDK32N6570). Includes: AI camera module (MB1854B), Display (MB1860B), STM32N6570 and a daughter board (C240400148) | Bala | | Workstation 106 | Working device |
| 26 | SR110 B0 (Label: 8). Includes: UART bridge, JTAG connector, 2 x USB C type cable | Bala | | Workstation 106 | Working device |
| 27 | SR110 B0 (Label: 9). Includes: UART bridge, JTAG connector, 2 x USB C type cable | Bala | | Workstation 106 | Working device |

ESP32-S3 Devices

| S.No | Device | Owner | Current User | Location | Comments |
|---|---|---|---|---|---|
| 1 | ESP32-S3 Board (EURESP-01). Includes: Power adapter, 1 C type USB cable (white) | Bala | | Workstation 91 | Working device |
| 2 | ESP32-S3 Board (EURESP-02). Includes: Power adapter, 1 C type USB cable (white) | Bala | | Workstation 91 | Working device |
| 3 | ESP32-S3 Board (EURESP-03). Includes: Power adapter, 1 C type USB cable (white) | Bala | | Workstation 91 | Working device |
Model Zoo Plan
Plan:
The development of the Model Zoo is planned in three phases:

Phase 1: Collect details about the models that we have and build a UI with those details.
Phase 2: Make the models available for download and usable on edge devices.
Phase 3: Build a PaaS like Edge Impulse.

Phase 1:

Goal: To identify use cases and relevant models, and to present the metrics of the models together with the use cases.

Flow: The flow chart above depicts the development of the Model Zoo in phase one.

1) IDENTIFY THE USE CASES:
The initial step focuses on gathering different use cases. The use cases can be edge- or TinyML-based, for example WiFi, telecom and retail use cases. Here are some of the identified use cases:
1. Face Matching
2. Image Segmentation
3. Object Detection & Identification
4. Gesture Detection
5. Pose Estimation

A survey (Aidan Smith) of different model-zoo instances by different chipset vendors is available here for reference: AI-ML Model Table.xlsx
We are open to suggestions, so you can add any use case related to TinyML here.

2) IDENTIFY MODELS FOR EACH GIVEN USE CASE:
The next step is to identify the models which can be used to implement the given use case. Model selection depends on multiple factors:
1. Model Size: TinyML applications require models that are lightweight and fit within the constraints of the target device's memory and storage. The model should be small enough to ensure efficient execution on resource-constrained hardware.
2. Computational Complexity: The model's computational complexity determines the inference time required for making predictions on the device. TinyML models should have low computational overhead to ensure real-time or near-real-time performance on the target hardware.
3. Accuracy: While model size and computational complexity are important, the model must also provide sufficient accuracy for the intended application. The selected model should achieve the desired level of performance while still meeting the constraints of the target device.
4. Data Requirements: The availability and characteristics of the training and inference data influence the choice of model architecture and its ability to generalize to unseen examples. TinyML models should be trained on representative datasets that capture the variability and complexity of real-world inputs.
5. Latency and Throughput Requirements: Depending on the application, latency (response time) and throughput (number of inferences per second) requirements may vary. The selected model should meet these performance metrics while operating within the resource constraints of the target device.

Additional models can be added here.

3) IDENTIFY METRICS FOR EACH MODEL AND DOCUMENTATION:
Collecting metrics of models for embedded devices in TinyML applications involves evaluating various aspects of model performance, resource utilization, and behavior on the target hardware. We can evaluate model metrics using the following platforms (a minimal latency-measurement sketch follows the milestones table below):
1. MLPerf: a benchmark suite for evaluating the performance of machine learning systems, including embedded devices. It provides standardized benchmarks across various tasks and platforms to measure inference time, accuracy, and energy consumption.
2. TensorFlow Lite Micro: a framework for deploying machine learning models on microcontrollers. It offers tools for profiling model performance, including inference time and memory usage, to optimize models for embedded devices.
3. Arduino Performance Monitor: a tool for profiling the performance of machine learning models deployed on Arduino boards. It helps developers measure inference time, memory usage, and other metrics to optimize model performance for resource-constrained environments.
4. Edge Impulse: a platform for developing and deploying embedded machine learning models. It provides tools for collecting and analyzing data, training models, and evaluating performance metrics such as accuracy, latency, and energy consumption on edge devices.

4) EXECUTION OF MODEL AND LOGGING METRICS / LAUNCH:
The metrics identified in the steps above will be collected by executing the identified models, and the metrics will be logged. The final step is to execute the trained model and package the model resources into an artifact. These artifacts are then made available in the UI of the web application, so people can use the models in their personal or company projects. The artifacts include everything required to run the model in a user's local environment without errors.

Milestones:

| Description | ETA |
|---|---|
| 1. Model Zoo development environment with draft site in AWS. 2. APIs for model details with filters. 3. Wireframe for model details page in Figma. 4. Model metrics collection for FaceNet-512 and FaceNet-128. | 5th April 2024 |
| 1. Data model confirmation for model metrics. 2. UI for model list page. 3. Wireframe for model metrics page in Figma. 4. Model metrics for YoloV8 - Detection, YoloV8 - Segmentation and YoloV8 - Pose. | 12th April 2024 |
| 1. API for model metrics information. 2. UI for model details page. 3. Finalization of wireframe. 4. Model metrics for YolACT, YoloV5 - Detection, YoloV5 - Segmentation and YoloV5 - Pose. | 19th April 2024 |
| 1. Model metrics basic info UI. 2. Basic unit testing. | 26th April 2024 |
| 1. Website testing with available models. 2. Model metrics detailed info UI. | 3rd May 2024 |
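Since inference latency is one of the metrics listed above, here is a minimal sketch of measuring it with the TensorFlow Lite Python interpreter. The model path, input data and run count are placeholders for illustration; actual collection for the Model Zoo may instead go through MLPerf, TFLite Micro or Edge Impulse as described.

```python
# Minimal sketch: measure average inference latency of a TFLite model.
# Model path and input data are placeholders for illustration.
import time

import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="model.tflite")  # placeholder path
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()[0]

# Random input matching the model's expected shape and dtype.
dummy = np.random.random_sample(input_details["shape"]).astype(
    input_details["dtype"]
)

runs = 50
start = time.perf_counter()
for _ in range(runs):
    interpreter.set_tensor(input_details["index"], dummy)
    interpreter.invoke()
    _ = interpreter.get_tensor(output_details["index"])
elapsed = time.perf_counter() - start

print(f"average inference time: {elapsed / runs * 1000:.2f} ms")
```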
Recommendation AI Model
Architecture

Machine Learning Model (Source)
This model focuses on developing a CatBoost machine learning model to forecast whether a customer will initiate a call, leveraging their access point's runtime logs. The model is trained on a dataset comprising a set of distinct fields. By analyzing runtime logs, the model aims to discern patterns indicative of call-initiation behavior. The extensive dataset ensures a comprehensive exploration of potential predictors.

File Structure
model.py: This file uses pickle to load and manage the CatBoost model, enabling call-in predictions (binary 0 or 1). It also extracts the top 5 field indexes along with their SHAP scores, offering insight into the prediction rationale. This approach facilitates interpretability and understanding of the model's reasoning process to a certain extent.
main.py: The main script serves as a single hub, using Flask to orchestrate model operations. It accepts data in JSON format via a POST method and returns parsed and indexed model outputs. Additionally, it correlates the results with the top field indexes retrieved from a local dictionary, improving result interpretation and usability.

Model Files
model.pkl: This file contains the serialized CatBoost model.

Test and Prototyping Notebooks: Jupyter notebooks for testing and prototyping purposes are available in the repository. They hold the results, outcomes and analysis of different ML models, with a variety of data points gathered at different points in time.

How to Run
To start a local development Flask server, run the following command: (a hypothetical sketch of such a service appears after this section)

Sample Payload using cURL
You can test the service using cURL with the following sample payload:

Sample Response
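The actual run command, cURL payload and response are not reproduced above. As a stand-in, here is a minimal, hypothetical sketch of what a Flask endpoint wrapping a pickled CatBoost model with SHAP-based top-field extraction could look like. File names, the field-name mapping and the payload shape are assumptions for illustration, not the repository's actual code.

```python
# Hypothetical sketch of a Flask service around a pickled CatBoost model.
# Paths, field names and payload format are illustrative assumptions.
import pickle

import numpy as np
import shap
from flask import Flask, jsonify, request

app = Flask(__name__)

with open("model.pkl", "rb") as f:           # serialized CatBoost model
    model = pickle.load(f)

explainer = shap.TreeExplainer(model)         # SHAP explainer for tree models

# Hypothetical mapping from feature index to a human-readable field name.
FIELD_NAMES = {0: "cpu_usage", 1: "memory_usage", 2: "rssi", 3: "reboot_count"}


@app.route("/predict", methods=["POST"])
def predict():
    features = np.array(request.get_json()["features"], dtype=float).reshape(1, -1)

    prediction = int(model.predict(features)[0])       # 0 = no call-in, 1 = call-in
    shap_values = explainer.shap_values(features)[0]   # per-feature contributions

    # Top 5 field indexes by absolute SHAP score, with their scores.
    top5 = np.argsort(np.abs(shap_values))[::-1][:5]
    reasons = [
        {"index": int(i),
         "field": FIELD_NAMES.get(int(i), f"field_{i}"),
         "shap_score": float(shap_values[i])}
        for i in top5
    ]
    return jsonify({"prediction": prediction, "top_fields": reasons})


if __name__ == "__main__":
    app.run(debug=True)
```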
Gstreamer
GStreamer is an open-source software framework for handling multimedia like audio and video. It's like a set of building blocks that developers can use to create applications that work with multimedia. Here's a breakdown of what GStreamer does: Building blocks: GStreamer is based on small software components called elements. These elements can handle specific tasks, like reading a video file, encoding audio, or displaying video on the screen. Pipeline concept: By connecting these elements together in a specific order, developers can create pipelines that perform complex multimedia tasks. Imagine building a pipeline where one element reads a video file, another converts it to a different format, and a third one displays the converted video on your screen. Flexibility: The beauty of GStreamer is its flexibility. Developers can easily swap elements in and out of the pipeline depending on the specific needs of their application. This allows them to handle a wide variety of multimedia formats and processing tasks. Here are some real-world applications of GStreamer: Media players: Many media players use GStreamer behind the scenes to handle different video and audio formats. Streaming applications: GStreamer can be used to build applications for streaming audio and video over networks. Video editing: GStreamer can be a powerful tool for building video editing applications. Overall, GStreamer is a versatile framework that gives developers a lot of control over how they handle multimedia data. Sample pipelines for various applications Stream the video from live capturing Gst-ai application: Astra machina has a sample application called gst-ai which runs machine learning model over video frames (either static video or camera input). In sdk v0.9.0 the gst-ai application works for image classification over a static video. The command for running this application is The pipeline used in this application is Reading the video: filesrc location=test_file.mp4: This section uses a element to read the video file named "test_file.mp4". Decoding the video: ! decodebin: The decoded video stream is then fed into a element. This element automatically detects the video format and uses the appropriate decoder to convert the compressed video data into a raw format. Video conversion: ! videoconvert: This element might be for optional format conversion. Without knowing the specific output of the decoder, it's difficult to say for sure. It might be converting the raw video data to a specific format for further processing. Splitting the video stream (branching): ! tee name=t_data: This is a crucial part of the pipeline. A element is used to duplicate the video stream. The element is named "t_data" here. This creates a branching point where the video data is sent to two separate processing paths. This path takes the video data from the "t_data" branch. queue: This element might be used for buffering the video data to handle fluctuations in processing speed. videoconvert: This element likely converts the video data to RGB format, which is a common format for image processing tasks. videoscale: This element resizes the video frame to a width of 224 pixels and a height of 224 pixels. This is because the image classification model runs for 224x224 images only. This part is nothing but the preprocessing of video frames before running inference nnThis part gives the path for the model which is read from the paramfile(.json). 
The classifier mode specifies the post-processing mode used to interpret the results, and synap_sink is the name of the appsink that captures the results from the machine learning model so that post-processing can be applied.

Overlay path: This path overlays the results from synap_sink onto the video frames. For image classification the results are visualized as labels at the bottom of the video; a textoverlay element is used for this, accessed by the name ic_label. Finally, a waylandsink fullscreen=true element displays the video with the overlaid text on the screen. The fullscreen=true option specifies that the video should be displayed in full-screen mode.

Cairo overlay
A Cairo overlay refers to a technique for overlaying graphical elements (text, shapes, images) on top of a video stream using the Cairo graphics library. Here's a breakdown of the concept:

Components involved:
1. GStreamer pipeline: The core framework that handles video data processing. It provides elements for capturing video from webcams, files, or network streams, decoding compressed video, and displaying the video on the screen.
2. Cairo library: A powerful 2D graphics library that lets you draw various shapes, text, and images onto a surface.
3. Cairo overlay element: A specific GStreamer element (typically cairooverlay, depending on the plugin implementation) that bridges the gap between GStreamer and Cairo. It allows you to integrate Cairo drawing functionality within your GStreamer pipeline.

Working principle:
1. The GStreamer pipeline captures or receives a video stream.
2. The Cairo overlay element is inserted into the pipeline at the point where you want to add the overlay.
3. When a video frame arrives at the Cairo overlay element, two signals are emitted:
"prepare": Provides an opportunity to set up the Cairo context for drawing on the current video frame. You can define fonts, colors, and other graphical properties here.
"draw": Emitted for each video frame. It provides a Cairo context and the timestamp of the frame. You can use the Cairo API to draw your desired overlay elements (text, shapes, images) onto the video frame within this callback function.

Benefits of using Cairo overlays:
Flexibility: You can create dynamic overlays that change based on the video content or user interaction.
Customization: Cairo provides a rich set of drawing functionality, allowing for diverse overlay designs.
Integration: It seamlessly integrates Cairo's drawing capabilities within your GStreamer video processing pipeline.
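To make the working principle concrete, here is a minimal sketch in Python, assuming the stock cairooverlay element from GStreamer's cairo plugin. In that element the setup hook is exposed as the "caps-changed" signal and the per-frame hook as the "draw" signal; the pipeline string and the overlay text below are illustrative assumptions, not taken from the gst-ai application.

```python
# Minimal sketch: draw a text label on every frame with cairooverlay.
# Assumes PyGObject, GStreamer 1.x and the cairo plugin are installed.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst
import cairo

Gst.init(None)

# videotestsrc stands in for the real source (file or camera).
pipeline = Gst.parse_launch(
    "videotestsrc num-buffers=300 ! videoconvert ! cairooverlay name=overlay "
    "! videoconvert ! autovideosink"
)
overlay = pipeline.get_by_name("overlay")

def on_caps_changed(_overlay, caps):
    # Called when the video format is known; a good place to cache width/height.
    print("video caps:", caps.get_structure(0).get_name())

def on_draw(_overlay, context, timestamp, duration):
    # Called for every frame with a ready-to-use cairo.Context.
    context.select_font_face("Sans", cairo.FONT_SLANT_NORMAL, cairo.FONT_WEIGHT_BOLD)
    context.set_font_size(32)
    context.set_source_rgb(1.0, 1.0, 0.0)          # yellow text
    context.move_to(20, 40)
    context.show_text(f"frame @ {timestamp / Gst.SECOND:.2f}s")

overlay.connect("caps-changed", on_caps_changed)
overlay.connect("draw", on_draw)

pipeline.set_state(Gst.State.PLAYING)
bus = pipeline.get_bus()
bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE,
                       Gst.MessageType.ERROR | Gst.MessageType.EOS)
pipeline.set_state(Gst.State.NULL)
```

In a real application the same callbacks could read the latest classification result produced by the inference branch and render it, which is essentially what the label overlay in the gst-ai pipeline does.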
Driver Distraction System
Usecase Description: Classifying different driver actions to determine whether the driver is distracted or driving safely.
Model used: InceptionResnetV2
Hardware Support: Raspberry Pi, Pi Camera
Base Model Github Link:
Gitlab Link to Repo and Steps to Train:
Backbone Research Paper:
Dataset Link:
Optimizations Done:
1. Normalized pixel values using the preprocessing technique specific to InceptionResNetV2, which applies mean subtraction and scaling based on the ImageNet dataset's mean and standard deviation. This preprocessing aligns the input data more closely with the data used to train InceptionResNetV2.
2. Performed data augmentation to cover different brightness levels, rotations and sizes.
3. Implemented early stopping to prevent the model from overfitting.
4. Added a learning rate scheduler, which can lead to more stable training, faster convergence, better generalization and robustness to hyperparameters.
5. Performed transfer learning with InceptionResNetV2: froze the initial layers and added a global average pooling layer plus a dense layer with the number of neurons equal to the number of classes for this use case (see the sketch after this section).
Download Formats Available: Tflite ()
Input and Output Images:
Input Image:
Output Image:
JSON of the details (For use for pushing into DB)
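A minimal sketch of the transfer-learning setup described in point 5 above, written with the Keras API. The class count, image size, optimizer settings and scheduler choice are illustrative assumptions, not values taken from the training repository.

```python
# Sketch of the InceptionResNetV2 transfer-learning head (assumed hyperparameters).
import tensorflow as tf
from tensorflow.keras.applications import InceptionResNetV2
from tensorflow.keras.applications.inception_resnet_v2 import preprocess_input

NUM_CLASSES = 10          # assumption: number of driver-action classes
IMG_SIZE = (299, 299)     # InceptionResNetV2's default input size

base = InceptionResNetV2(weights="imagenet", include_top=False,
                         input_shape=IMG_SIZE + (3,))
base.trainable = False    # freeze the pretrained backbone layers

inputs = tf.keras.Input(shape=IMG_SIZE + (3,))
x = preprocess_input(inputs)                       # model-specific normalization
x = base(x, training=False)
x = tf.keras.layers.GlobalAveragePooling2D()(x)    # added pooling layer
outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)

model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Early stopping and a learning-rate scheduler, as described in the optimizations.
callbacks = [
    tf.keras.callbacks.EarlyStopping(patience=5, restore_best_weights=True),
    tf.keras.callbacks.ReduceLROnPlateau(factor=0.2, patience=2),
]
# model.fit(train_ds, validation_data=val_ds, epochs=30, callbacks=callbacks)
```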
Gen AI
What is Gen AI?
Types of Gen AI?
What have we done?
Real-time integration?
How could this benefit us?
How feasible is this?
Phi 3
Llama
Localllm
OpenELM
Public data points
Building our own model
Build an image V1.0.0
Reference link: https://synaptics-astra.github.io/doc/v/0.9.0/yocto.html#how-to-build-an-image

(Not necessary - only if you need an additional build environment, spin up and use a Docker container.) To start the container, use the following command line:

On the Build-Server, clone the repository with the following command (current branch - ==v1.0.0==):

Then run the following commands to build an image. Run them inside a ==screen== session, because a complete fresh build takes ~3 hours.

![accept the conditions with [space]](/api/attachments.redirect?id=7c9d3f53-1eb1-4d3f-844c-14857ee1bcd0 " =640x400")

* The resulting ==Image== is generated in:
[UI] Location widget
==Widget ID : 4010==

Wireframe (Device info page):

==Component - Location Widget==

HTML [location-widget.component.html]

Style [location-widget.component.scss]

The following are the different properties used in the location-widget:

| Property | Function |
| --- | --- |
| @media | Used for responsive web design across different devices. |
| !important | This rule in CSS adds more importance to a property/value than normal. |
| padding-top | Sets the top padding (space) of an element. |
| margin-top | Sets the top margin of an element. |
| flex-direction | Sets the direction of the flexible items. |
| font-weight | Sets the weight (boldness) of the font [ bold/bolder/normal/100-900 ]. |
| border-radius | Defines the radius of the element's corners. |

Typescript [location-widget.component.ts]

==Component - Tile:==

HTML [tile.component.html]

Typescript [tile.component.ts]

Disabled for 5mins
Software Development Lifecycle - Security Guidelines
The focus is on ensuring code security by identifying and mitigating risks in project dependencies and codebases.

NodeJS/JavaScript Vulnerability Scanning
Tools to Use:
npm audit: Built-in tool for scanning Node.js dependencies.
Retire.js: Detects vulnerabilities in frontend JavaScript libraries.
OWASP Dependency-Check: Identifies known vulnerabilities in project dependencies.

Setup and Usage:
1. npm audit:
Install: No installation needed (integrated with npm).
Run:
Review the report and address any vulnerabilities (a hedged CI gating sketch is included at the end of this page).
2. Retire.js:
Install:
Run:
Document and fix any vulnerabilities found.
3. OWASP Dependency-Check:
Download from OWASP Dependency-Check.
Configure and run using the CLI.
Analyze the results and take appropriate actions.

Automated Scanning for the Astral REST Server
Tools to Use:
OWASP ZAP: For dynamic analysis.
SonarQube Community Edition: For static code analysis.

Setup and Integration:
1. OWASP ZAP:
Install OWASP ZAP.
Integrate into the GitLab CI/CD pipeline:
Schedule regular scans and review the reports.
2. SonarQube:
Install and configure SonarQube.
Integrate with the GitLab CI/CD pipeline for static analysis:
Regularly review SonarQube reports and address issues.

Automated Scanning for the Astral WEB Server
Tools to Use:
OWASP ZAP: For dynamic web application security testing.
Nikto: For web server scanning.

Setup and Integration:
1. OWASP ZAP: Same setup as the REST server.
2. Nikto:
Install:
Run:
Integrate with GitLab CI/CD for regular scanning and reporting:

Golang Vulnerability Scanning
Tools to Use:
Gosec (Go Security Checker): For security scanning.

Setup and Usage:
1. Gosec:
Install:
Run:
Document and remediate vulnerabilities found.

Automated Scanning for Golang Projects
Tools to Use: Gosec

Setup and Integration:
1. Gosec:
Integrate into the GitLab CI/CD pipeline:
Configure it to run on code commits, merges, or scheduled intervals.
Monitor and fix issues based on the scan results.

OWASP Dep-Scan
OWASP Dep-Scan is a next-generation security and risk audit tool based on known vulnerabilities, advisories, and license limitations for project dependencies. Both local repositories and container images are supported, making it ideal for CI environments.

Features:
Scans most application code, local repos, Linux container images, Kubernetes manifests, and OS packages.
Performs advanced reachability analysis for multiple languages.
Fast, local package vulnerability scanning without using a server.
Generates a Software Bill-of-Materials (SBOM) with Vulnerability Disclosure Report (VDR) information.
Creates a Common Security Advisory Framework (CSAF) 2.0 VEX document.
Conducts deep package risk audits for dependency confusion attacks and maintenance risks.

Usage:
Dep-Scan is ideal for use during continuous integration (CI) and as a local development tool.

Installation:
Using OCI Artifacts via ORAS CLI:
Single Binary Executables:

Configuring GitLab CI/CD:
Add the following job to your .gitlab-ci.yml file to integrate OWASP Dep-Scan into your pipeline:

Running Dep-Scan:
Dep-Scan can be run in various modes depending on your requirements. Here are some common commands:
Scanning a Local Directory:
Server Mode: In server mode, use the following commands:
Cache the vulnerability database:
Scan a local directory:

Additional Options and Customization:
Dep-Scan offers various customization options through environment variables and command-line arguments. For example, to enable risk audits and use a specific private namespace:
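As a concrete illustration of wiring one of these scanners into a pipeline, here is a minimal, hedged sketch of a Python helper that runs `npm audit --json` and fails the job when high or critical findings are present. The severity threshold and the way the script is invoked from CI are assumptions, not part of the guidelines above.

```python
#!/usr/bin/env python3
"""Fail a CI job if `npm audit` reports high/critical vulnerabilities (sketch)."""
import json
import subprocess
import sys

# Severities that should break the build (assumed policy).
BLOCKING = {"high", "critical"}

def main() -> int:
    # `npm audit --json` returns machine-readable results; it exits non-zero
    # whenever vulnerabilities exist, so we parse the output instead of
    # relying on the exit code alone.
    proc = subprocess.run(["npm", "audit", "--json"], capture_output=True, text=True)
    try:
        report = json.loads(proc.stdout or "{}")
    except json.JSONDecodeError:
        print("Could not parse npm audit output", file=sys.stderr)
        return 2

    # npm 7+ places per-severity counts under metadata.vulnerabilities.
    counts = report.get("metadata", {}).get("vulnerabilities", {})
    blocking_total = sum(counts.get(sev, 0) for sev in BLOCKING)

    print("npm audit severity counts:", counts)
    if blocking_total:
        print(f"{blocking_total} blocking vulnerabilities found", file=sys.stderr)
        return 1
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

The same gating pattern can be applied to the other scanners (Retire.js, Gosec, Dep-Scan), since each of them can emit JSON reports that a small script can evaluate against a project-specific policy.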
User Login API Flow
Flow Diagram:
Weekly Status Report -6th June
Executive Summary:
Model Zoo:
Worked on setting up the caching and the optimizations for the landing and the models listing pages.
Brought out the APIs and the initial page for the model details and the metrics.
Worked on bringing up the sign-in pages for the synaptics site with the APIs for the relevant tasks.
Set up the pipeline for the SL1680 metrics and started collecting the metrics for the models from SL1680.
Collected general model metrics and the usecase model metrics and pushed them to the DB.

Accomplishments:
Dev:
AI-392: Collection of model metrics for the set of selected models including RPI – ==Closed.==
AI-398: Planning of DB data model for tenancy support – ==Closed.==
AI-408: Plan for metrics gathering in RPI and SL1680 – ==Closed.==
AI-412: Selection of models for Synaptics/Public Model Zoo models finalization – ==Closed.==
AI-413: Setup of SL1680 board for executing the models – ==Closed.==
AI-452: Benchmark metrics for SL1680 – ==Closed.==
AI-356: Backend APIs for landing page with unit testing – ==Closed.==
AI-366: Backend APIs for model listing and models search – ==Closed.==
AI-420: Collection of model metrics for the set of selected models including RPI – ==Closed.==
AI-431: Metrics collection of use case - RPI – ==Closed.==
AI-454: Curation of model details for each of the selected models - 5 models – ==Closed.==
AI-357: UI for Landing Page – ==Closed.==
AI-392: Collection of model metrics for the set of selected models including RPI – ==Closed.==
AI-405: Face Matching System - Perform live matching and detection of faces – ==Closed.==
AI-406: Metrics collection of use case - RPI – ==Closed.==
AI-453: API request memoization and caching data – ==Closed.==
AI-377: Packaging and Bundling of CLI tools with documentation on usage – ==Closed.==
AI-375: CLI tool support for deployment into Raspberry Pi – ==Closed.==
AI-393: Collection of model metrics for Synaptics 5 models – ==In Progress.==
AI-448: Plan for DB backup using PG backup – ==In Progress.==
AI-358: UI for user sign-up / login page with unit testing – ==In Progress.==
AI-360: Backend APIs for User sign up and Login unit testing – ==In Progress.==
AI-365: APIs for model details, metrics and download options – ==In Progress.==
AI-363: UI for model details, metrics and download options – ==In Progress.==
AI-368: UI for Other page – ==In Progress==
AI-369: APIs for Other pages – ==In Progress==

DevOps:
AI-387: CI/CD pipeline to push model metrics to DB – ==Closed.==
AI-390: Curation of model details for each of the selected models - 5 models - ==Closed.==
AI-456: Pipeline for collection of metrics from SL1680 - ==Closed.==
AI-381: Deployment pipeline setup of public model zoo site - Prod – ==In Progress.==
AI-382: Planning of deployment of model zoos for synaptics - Prod – ==In Progress.==

Issues for Escalations/Blockers:
NIL

Plan for next week:
Model Zoo:
Usecase execution for Obstacle detection and CSI based person detection usecases.
Collection of models and metrics for RPI and SL1680 for the next set of models.
Bug fixing on the UI and the API part.
Planning for deployment of the public and synaptics sites.
Plan for DB migration for the various instances.

PTO and Vacation:
Jenani: 03rd June
Sanjay: 04th June

DETAILED SUMMARY:
JIRA TICKET(S) COMPLETED:
[JENANI SAMPATH] AI-387: CI/CD pipeline to push model metrics to DB – ==Closed.==
Modified the API endpoints based on the new DB schema in the pipeline.
Completed the metrics collection using the pipeline.
[JENANI SAMPATH] AI-390: Curation of model details for each of the selected models - 5 models - ==Closed.==
Completed the metrics collection for 5 models: Yolov8face, wake_word tflite, yolact, Mobilenetv3 large and rtdetr.
Collected model details and images for all of the above models and added them in details.txt.
The pipeline was successful.
[JENANI SAMPATH] AI-456: Pipeline for collection of metrics from SL1680 - ==Closed.==
Configured the pipeline to collect metrics directly from the SL1680 board.
Tested the working and updated the documentation for the same.
[SURAJRAM SRINIVASAN] AI-392: Collection of model metrics for the set of selected models including RPI – ==Closed.==
Set up the pipeline for collecting the metrics using the MLTK profiler.
Collected the model metrics and pushed the details of the model to the DB.
[SURAJRAM SRINIVASAN] AI-398: Planning of DB data model for tenancy support – ==Closed.==
Finalized and planned the DB data model for the synaptics site.
Decided on adding a new schema for the synaptics site and modified the backend accordingly.
[SURAJRAM SRINIVASAN] AI-408: Plan for metrics gathering in RPI and SL1680 – ==Closed.==
Explored the profiler tools for gathering metrics from RPI and SL1680.
Finalized the metrics pipeline and working on the metrics collection from SL1680 for a sample usecase.
[SURAJRAM SRINIVASAN] AI-412: Selection of models for Synaptics/Public Model Zoo models finalization – ==Closed.==
Identified the different models and the usecases that can be brought out for the model zoo site.
Drafted a table with the general models, the artifacts and the other related details for the general models and sent it out for review.
Followed the same steps to get a collection of the usecase models and frame the usecases as well.
[SURAJRAM SRINIVASAN] AI-413: Setup of SL1680 board for executing the models – ==Closed.==
Installed the necessary packages and set up the SL1680 board for metrics collection.
Tested the pipeline and verified the execution of the models.
[ANEES ASLAM] AI-452: Benchmark metrics for SL1680 – ==Closed.==
Completed and streamlined benchmarking for SL1680.
[KEERTHANA SUNDARBABU] AI-356: Backend APIs for landing page with unit testing – ==Closed.==
Modified all the landing page APIs based on the new schema (new API spec).
All the APIs have been modified according to the Sequelize ORM and deployed to all 3 servers.
Shared unit test results for both the landing and listing pages on Jira.
[KEERTHANA SUNDARBABU] AI-366: Backend APIs for model listing and models search – ==Closed.==
A search API has been added in the backend: /api/models?search=
The API fetches the model list based on whether the search param is model_name or usecase_name.
The search API is also sequelized, tested and deployed.
[JAYAPRAKASH MASILAMANI] AI-420: Collection of model metrics for the set of selected models including RPI – ==Closed.==
Collected the model metrics for different Yolo models like instance segmentation, object detection, pose estimation, face detection and Fast SAM segmentation through MLTK profiling.
Pushed all the model metrics to the pipeline and loaded them in the database.
[JAYAPRAKASH MASILAMANI] AI-431: Metrics collection of use case - RPI – ==Closed.==
Generated the model metrics for the Inception Resnetv2 model of the Driver Distraction usecase.
Pushed the model metrics via the CI pipeline and loaded the same in the DB.
[JAYAPRAKASH MASILAMANI] AI-454: Curation of model details for each of the selected models - 5 models – ==Closed.==
Collected the model metrics for pytorch models such as Alexnet, SqueezeNet, ShuffleNet, MobileNetV3 and ResNet50 through MLTK profiling.
Pushed all the model metrics to the pipeline and loaded them in the database.
[SWATHI RAJAMANI] AI-357: UI for Landing Page – ==Closed.==
Updated the landing page and search page based on the new API changes.
Added pagination for model listing.
Added filter functionalities for model filtering.
[SHAIK SHUAIB AFNAN] AI-392: Collection of model metrics for the set of selected models including RPI – ==Closed.==
Pushed metrics of ResNet101, ShuffleNetV2 & MobileNetV3 pytorch models into the ai-model-zoo repository.
[SHAIK SHUAIB AFNAN] AI-405: Face Matching System - Perform live matching and detection of faces – ==Closed.==
Pulled the face-matching data into an Amazon S3 bucket using DVC and pulled it for use in inference.
Wrote a script to log the parameters, metrics and artifacts into MLflow.
Performed inference of the face matching system using the data pulled from DVC and logged the training info into MLflow.
Added details on how to perform the inference into the readme file.
[SHAIK SHUAIB AFNAN] AI-406: Metrics collection of use case - RPI – ==Closed.==
Pushed the use-case metrics from the Raspberry Pi for the face-matching system into the ai-model-zoo repository successfully.
The metrics are collected using the tflite file.
[SANJAY RAVI] AI-453: API request memoization and caching data – ==Closed.==
Implemented caching for the Home and Models pages.
Implemented animations and transitions.
[SANJAY RAVI] AI-377: Packaging and Bundling of CLI tools with documentation on usage – ==Closed.==
Packaging and bundling of the CLI tool has been done and it has been pushed to PyPI.
[SANJAY RAVI] AI-375: CLI tool support for deployment into Raspberry Pi – ==Closed.==
CLI support for deployment in RPI has been completed.
JIRA TICKET(S) IN PROGRESS:
[JENANI SAMPATH] AI-381: Deployment pipeline setup of public model zoo site - Prod – ==In Progress.==
Created a deployment pipeline for each environment: dev, qa, release beta and production respectively.
Need to update GitLab variables for release beta and prod once the setup is ready.
[ETA - 12/06]
[JENANI SAMPATH] AI-382: Planning of deployment of model zoos for synaptics - Prod – ==In Progress.==
Created a deployment pipeline for each environment: dev, qa, release beta and production respectively.
Need to update GitLab variables for release beta and prod once the setup is ready.
[ETA - 12/06]
[SURAJRAM SRINIVASAN] AI-393: Collection of model metrics for Synaptics 5 models – ==In Progress.==
Started collecting the metrics for the Synaptics models using the SL1680 pipeline.
Working on collecting the metrics for the next set of models.
[ETA - 07/06]
[SURAJRAM SRINIVASAN] AI-448: Plan for DB backup using PG backup – ==In Progress.==
Tested using PG backup and PG restore for backup of the DB data.
Working on writing a CRON script to trigger a backup on a daily basis (a hedged sketch of such a script is included after this report).
[ETA - 07/06]
[KEERTHANA SUNDARBABU] AI-358: UI for user sign-up / login page with unit testing – ==In Progress.==
Working on designing the user login page UI.
Improving the UI and working on the UI flow.
Working on integrating with backend user authentication using an access token.
[ETA - 07/06]
[JAYAPRAKASH MASILAMANI] AI-360: Backend APIs for User sign up and Login unit testing – ==In Progress.==
Created a user pool in AWS Cognito for the User management module for the Synaptics ModelZoo.
Completed the implementation of APIs for User SignUp, User Login and User Logout using the CognitoServiceProvider module in the backend.
Implemented the authentication middleware to verify the Cognito Access Token using the CognitoJwtVerifier module for all the fetch APIs in the backend.
Working on the unit test implementation of the User login and logout APIs.
[ETA - 07/06]
[SWATHI RAJAMANI] AI-365: APIs for model details, metrics and download options – ==In Progress.==
Added an API route for model details and metrics.
Made schema changes.
[ETA - 07/06]
[SWATHI RAJAMANI] AI-363: UI for model details, metrics and download options – ==In Progress.==
Updated the Model details page from the API references.
[ETA - 07/06]
[SANJAY RAVI] AI-368: UI for Other page – ==In Progress==
Need to implement UI components for the Other page with proper transitions.
[ETA - 07/06]
[SANJAY RAVI] AI-369: APIs for Other pages – ==In Progress==
Updating the API for the model zoo Other pages.
[ETA - 07/06]
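Relating to AI-448 above, a minimal sketch of what a daily PG backup script invoked from cron might look like. The host, database, user and backup directory are placeholders, not the actual values, and credentials are assumed to come from ~/.pgpass.

```python
#!/usr/bin/env python3
"""Daily PostgreSQL backup sketch (run from cron, e.g. `0 2 * * *`)."""
import datetime
import pathlib
import subprocess

# Placeholder connection details and backup location (assumptions).
DB_HOST = "localhost"
DB_NAME = "modelzoo"
DB_USER = "backup_user"
BACKUP_DIR = pathlib.Path("/var/backups/postgres")

def main() -> None:
    BACKUP_DIR.mkdir(parents=True, exist_ok=True)
    stamp = datetime.date.today().isoformat()
    outfile = BACKUP_DIR / f"{DB_NAME}-{stamp}.dump"

    # pg_dump custom format (-F c) so pg_restore can be used for recovery;
    # the password is expected to be supplied via ~/.pgpass.
    subprocess.run(
        ["pg_dump", "-h", DB_HOST, "-U", DB_USER, "-F", "c",
         "-f", str(outfile), DB_NAME],
        check=True,
    )
    print(f"backup written to {outfile}")

if __name__ == "__main__":
    main()
```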
User Login APIs Flow
Flow Diagram:
Model Metrics
The model metrics are updated to the DB and MLflow using a CI/CD pipeline.

Repo: ai-model-zoo []

Workflow
The CI/CD pipeline consists of two stages: generate metrics and upload to mlflow-db (a hedged sketch of what the upload stage might do is included at the end of this page).

1. For Model-zoo Public
1. Check out a branch from the updated_metrics branch and name the branch in this format .
2. In that branch, edit the details file in the details directory with the relevant details about the model and rename it to your model's name.
3. While adding the model download URLs, make sure to add all the formats available.
4. Add the model file in the model directory.
5. Push the changes to trigger the pipeline.
6. Verify the pipeline is successful in GitLab.

Gitlab-ci for model metrics collection [model-zoo public]

Synaptics SL1680 pipeline for metrics collection
The CI/CD pipeline consists of two stages: generate metrics in SL1680 and upload to mlflow-db.
1. Check out a branch from the sl1680_pipeline branch and name the branch in this format .
2. In that branch, edit the details file in the details directory with the relevant details about the model and rename it to your model's name.
3. While adding the model download URLs, make sure to add all the formats available.
4. Add the model file in the model directory.
5. Push the changes to trigger the pipeline.
6. Verify the pipeline is successful in GitLab.

Gitlab-ci for SL1680 pipeline metrics collection
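The actual upload script lives in the ai-model-zoo repository; purely as a rough illustration, the upload to mlflow-db stage might do something along these lines. The tracking URI, experiment name, file paths and metric names are assumptions.

```python
# Hedged sketch of an "upload to mlflow-db" stage (names/URIs are assumptions).
import json
import os

import mlflow

TRACKING_URI = os.environ.get("MLFLOW_TRACKING_URI", "http://mlflow.example.local:5000")
EXPERIMENT = "model-zoo-metrics"          # assumed experiment name

def upload(metrics_file: str, model_name: str) -> None:
    """Log the generated metrics JSON for one model as an MLflow run."""
    with open(metrics_file) as fh:
        metrics = json.load(fh)          # e.g. {"latency_ms": 12.3, "size_mb": 4.1}

    mlflow.set_tracking_uri(TRACKING_URI)
    mlflow.set_experiment(EXPERIMENT)

    with mlflow.start_run(run_name=model_name):
        mlflow.log_param("model_name", model_name)
        for key, value in metrics.items():
            if isinstance(value, (int, float)):
                mlflow.log_metric(key, value)
            else:
                mlflow.log_param(key, value)

if __name__ == "__main__":
    upload("metrics/yolov8face.json", "yolov8face")  # illustrative invocation
```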
Status - Week ending on 31/05
Summary:

Internal-Engineering - Gitlab/Jira/AWS/Wiki:
The ecloud repository's Makefile has now been updated to facilitate the new tags and versioning structure, making it compatible with CICD-Maestro.

Internal Environments - QA/Dev:
A recommendation ML service has been dockerized and deployed on the cloud-devel server for testing and demonstrations. All required fields for the ML model have been well-defined, and a private library has been written in Python to process the JSON input and transform it into a 2D labeled array for the model to make predictions. The JSON output response is now structured and appropriately handles error messages.
Created a new infra with the USP-Controller coexisting with Astral services, to be shared with Airtel.

Singtel Production:
With the help of the existing certs in the Huawei Production environment, we created temporary domains for the REST and WEB services in the VMware setup for "penetration-testing" purposes.
Singtel could consume messages for all of the runtime-streaming and push-notification-feature topics from their Kafka brokers. Still, troubleshooting is in progress from their end on why the messages aren't being consumed through the NAT IP but instead via the private subnet range.
The ELK stack has been deployed in the VMware Production environment.

Airtel Production:
For Zyxel & DP devices the band-steering configuration is not enabled at the profile level. Created a new profile & analytic group to test the band-steering configuration for around 1k Zyxel devices.
As we have started logging the reboot count of devices per day, it has also been added to the weekly report - devices should be rebooted at least 10 times a day and this should be repeated for at least 3 times over a week.
The Airtel production cloud has been upgraded to 2.10.0.
Developed a Docker image using Python 3.8 and all necessary dependencies, and successfully deployed it to production.
An automated script retrieves fingerprint data hourly for 6 hours, then combines those temporary files into a single daily ORC file.

Tickets information:

| Closed Tickets | Description |
| --- | --- |
| AIR-3643 | Create & train model for providing top 5 recommended action |
| AIR-3533 | JSON query processing |
| DEVOPS-1116 | Refactor Make file in ecloud |
| SMN-743 | Resurrect ELK stack in VMware Singtel Prod |
| AIR-3734 | Enable Band-Steering configuration |
| AIR-3735 | Add the reboot count of devices |
| DEVOPS-1159 | Upgrade Airtel Production cloud to 2.10.0 |
| DEVOPS-1161 | New instance for Astral-USP deployment |
| AIR-3737 | Docker image for Python packages |

| In-Progress Tickets | Description | ETA |
| --- | --- | --- |
| AIR-3533 | LLM based resolution | 07/06/2024 |
| AIR-3533 | Prompt Engine for Recommendation Service | 07/06/2024 |
| SMN-739 | Kafka Message consumption from VMware Prod | 07/06/2024 |
| SMN-742 | Multi domain Cert for Singtel Staging (SAN Certs) | 07/06/2024 |
| SMN-744 | Penetration testing for VMware Production setup | 07/06/2024 |
| AIR-3738 | Script for Fingerprint report using Spark - 6 hours and one hour | 05/06/2024 |

PTO & Vacation:
Bharath Rajiv A - 31/05/2024

Plan for next week:

Recommendation Service
To develop a model for recommendation services, we will begin by defining the response and finalizing the service architecture. Next, we will make plans for integrating machine learning models with the service and identify the required parameters. Finally, we will finalize the model, response, and input JSON format.
To develop a Large Language Model (LLM) capable of offering solutions to access point problems, we will leverage the CatBoost model we have already trained.
This LLM will utilize the insights of the CatBoost model to identify and suggest resolutions to issues related to access points. By integrating the strengths of both models, we aim to enhance our ability to troubleshoot and address these problems more effectively and efficiently.

Penetration testing and SFTP connection establishment
Start with the data migration activity, as we've created temporary domains for the REST and WEB services in the VMware setup for penetration-testing purposes.
Ensure the SFTP server connection has been established from the backup server by modifying certain firewall and NAT rules.

Logic change in monitoring data
Instead of sending the monitoring files generated by the self-healing services, we need to collect the key values from the ORC data using Spark.
We need to consider only the devices which exceed the threshold in more than 50% of all the data collected at runtime.

Reboot Counts
Airtel reported that for some of the devices the reboot counts on TR and Embedur don't match. So, for some of the devices we will collect the data at regular intervals on both sides for further investigation.

Detailed Summary:

Recommendation Service
A recommendation ML service has been successfully dockerized and deployed on the cloud-devel server for testing and demonstration purposes. All required fields for the model have been meticulously defined to ensure accuracy and consistency. A private library, developed in Python, processes the JSON input by transforming it into a 2D labeled array that the model can utilize to generate predictions. The output response from the model is now structured in JSON format, ensuring clarity and ease of interpretation. Additionally, error handling mechanisms have been implemented to manage and report any issues that arise during the prediction process. This service is currently undergoing testing, and active changes and tweaks are being made by the development team.

VMware Penetration testing and Kafka Message consumption
From the existing certs in the Huawei Production environment, we created temporary domains for the REST () and WEB () services for the VMware Prod setup. Francheska from the Singtel end ensured that all the messages are being consumed from the push-notification service and runtime-streaming topics.

ELK Stack setup in Singtel VMware Prod
Services pertaining to the ELK stack setup have been deployed in the VMware Production environment. Ensured that the functionality for Elasticsearch, Logstash, Kibana and Filebeat works as expected.

Band-Steering
Created a new analytic group and profile called "Zyxel_band_steering" with band steering enabled in the RRM configuration. Using REST APIs, changed the analytic group of the 1k Zyxel devices.

Self-healing weekly report
Used the self-healing reports for the past 7 days, which contain the devices rebooted at least 10 times a day. Used a Python script to fetch the repeated cases over the period of the last days.

Cloud Upgrade
On 30/5/24 the Airtel production cloud was upgraded to 2.10.0. In this release we have made some improvements to the client model. The geo-location service will be deployed once Airtel creates the policy in the firewall.

Python Package in Production
Built and deployed a Docker image to production. The image uses a Python 3.8 base and incorporates all essential Python libraries for the application.

Fingerprint Report
This script automates the collection and consolidation of fingerprint data for a 24-hour period. It gathers data hourly for the first 6 hours and merges the files into a single, daily ORC file for further analysis (a rough sketch follows below).
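As an illustration of the Fingerprint Report step described just above, here is a rough PySpark sketch of merging the hourly ORC files into one daily ORC file. The paths, directory layout and single-file compaction policy are assumptions, not the production script.

```python
# Hedged sketch: merge hourly fingerprint ORC files into a single daily ORC file.
from datetime import date

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("fingerprint-daily-merge").getOrCreate()

day = date.today().isoformat()
hourly_path = f"/data/fingerprint/tmp/{day}/hour=*"   # assumed layout of hourly dumps
daily_path = f"/data/fingerprint/daily/{day}"          # assumed output location

# Read every hourly ORC file for the day and write them back as one ORC output.
df = spark.read.orc(hourly_path)
(df.coalesce(1)                 # single output file for the day
   .write.mode("overwrite")
   .orc(daily_path))

print(f"merged {df.count()} fingerprint rows into {daily_path}")
spark.stop()
```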
New instance for Airtel-Astral-USP
A new instance with Astral services was spun up with our Terraform IaC. USP services were deployed with certs compatible with TR369 devices.
Penetration Testing
Penetration Testing on AWS Resources
As we are an AWS customer, we are permitted to conduct penetration testing (pentesting) on our own AWS resources, subject to the guidelines and restrictions outlined in the AWS Penetration Testing policy. This provides an overview of the key considerations and steps involved in performing pentesting on your AWS infrastructure.

AWS Penetration Testing Policy
AWS allows customers to perform security assessments and penetration tests on their own resources hosted within AWS. However, certain conditions and guidelines must be followed:
- Customers can conduct pentesting on specific services, such as Amazon EC2 instances, NAT Gateways, and load balancers, without prior approval.
- For certain services, such as AWS DNS (Route 53), AWS Direct Connect, and AWS Shield, customers must obtain prior approval before testing.
Ref: AWS Penetration Testing

Scope and Limitations
When conducting a pentest on your AWS resources, adhere to the following scope and limitations:
- Limit the testing scope to your own resources only.
- Do not perform tests that could impact other AWS customers or AWS infrastructure.
- Avoid tests that may cause degradation of service, Distributed Denial of Service (DDoS), or excessive resource consumption.
Ref: AWS Acceptable Use Policy

Notification and Approval Process
Before initiating a pentest, notify AWS by following these steps:
1. Submit a request through the AWS Vulnerability / Penetration Testing Request Form.
2. Provide details about your planned testing activities, including start and end dates, specific AWS services to be tested, and source IP addresses.
3. AWS will review the request and provide additional guidance or restrictions if necessary.
Ref: AWS Penetration Testing

Permitted Testing Techniques
AWS allows the following testing techniques:
- Vulnerability scanning
- Port scanning
- Manual testing methods
However, the following testing techniques are prohibited or restricted:
- DNS zone walking
- DoS attacks or simulating DoS attacks
- Accessing or attempting to access AWS services or infrastructure not part of your own resources
Ref: AWS Acceptable Use Policy

Third-Party Services and Applications
If your AWS infrastructure hosts third-party services or applications, obtain explicit permission from the third-party provider before including them in your pentest scope. Ensure compliance with any additional terms and conditions set by the third-party provider.
Ref: AWS Penetration Testing

Legal and Compliance Considerations
Ensure that your pentesting activities comply with all applicable laws, regulations, and the AWS Acceptable Use Policy. This includes:
- Respecting intellectual property rights
- Adhering to privacy laws and data protection regulations
- Understanding the legal implications and potential risks associated with pentesting
Ref: AWS Acceptable Use Policy (Penetration Testing)

Reporting and Responsible Disclosure
If you discover any vulnerabilities or security issues during your pentest, responsibly disclose them to AWS through the AWS Vulnerability Reporting process. AWS encourages responsible disclosure and appreciates the efforts of security researchers in identifying and reporting potential security vulnerabilities.

Black-Box Penetration Testing Test Plan
Objective: The goal of this penetration test is to identify and exploit vulnerabilities in the EC2 instance running specific services to assess the security posture.
Scope: This test will focus on the external aspects of the EC2 instance without internal access or prior knowledge.

1. Scope of Work (SoW)
In-Scope:
- The EC2 instance hosting the target services.
- All publicly accessible services and applications running on the instance.
Out-of-Scope:
- Internal AWS infrastructure and configurations.
- Any other AWS services not explicitly mentioned as in-scope.

2. Methodology
- Reconnaissance: Gather information about the target instance using passive and active reconnaissance techniques. Tools: WHOIS, DNS enumeration, Shodan, and other open-source intelligence (OSINT) tools.
- Enumeration: Identify live services and open ports. Tools: Nmap, Masscan.
- Vulnerability Scanning: Scan for known vulnerabilities in the identified services. Tools: Nessus, OpenVAS, Nikto.
- Manual Testing: Perform manual testing to identify vulnerabilities that automated tools might miss. Techniques: SQL injection, Cross-Site Scripting (XSS), authentication bypass, etc.
- Exploitation: Attempt to exploit identified vulnerabilities to assess their impact. Tools: Metasploit, custom scripts.
- Post-Exploitation: Assess the extent of the compromise and potential data access. Techniques: Privilege escalation, data extraction.

3. Rules of Engagement
Allowed Actions:
- Network scanning and enumeration.
- Vulnerability scanning and manual testing.
- Exploitation of identified vulnerabilities.
Prohibited Actions:
- Denial of service (DoS) attacks.
- Any activity that could cause disruption to the services.
Testing Window: Preferred time frames for conducting the tests to minimize operational impact.

4. Estimated Timeline
- Reconnaissance: 1-2 days.
- Enumeration: 1-2 days.
- Vulnerability Scanning: 1-2 days.
- Manual Testing: 3-5 days.
- Exploitation: 2-3 days.
- Reporting: 2-3 days.

5. Reporting
Preliminary Report: Initial findings and potential critical vulnerabilities identified during the test.
Final Report: Comprehensive report including:
- Executive summary.
- Detailed findings with severity ratings.
- Evidence (with screenshots).
- Recommendations.
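As a rough illustration of the enumeration phase (normally done with Nmap or Masscan), the sketch below shows a bare-bones TCP connect scan in Python. The host name and port list are hypothetical placeholders; in practice this would be run only against in-scope instances, within the approved testing window, and in line with the rules of engagement above.

```python
# Minimal TCP connect scan sketch (assumes the target is an EC2 instance you own
# and that the AWS pentesting guidelines above have been satisfied).
import socket

TARGET_HOST = "ec2-198-51-100-10.compute-1.amazonaws.com"  # hypothetical instance
COMMON_PORTS = [22, 80, 443, 8080, 8443]                   # illustrative subset

def scan(host: str, ports: list[int], timeout: float = 1.0) -> dict[int, bool]:
    """Return a mapping of port -> open/closed using plain TCP connects."""
    results = {}
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            results[port] = sock.connect_ex((host, port)) == 0
    return results

if __name__ == "__main__":
    for port, is_open in scan(TARGET_HOST, COMMON_PORTS).items():
        print(f"{port}/tcp: {'open' if is_open else 'closed/filtered'}")
```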
Infrastructure Design
Model Zoo Internal Prod Architecture
Models List
| Model | Domain | Status | Supported Devices | Qualifiers | Gitlab Link |
|---|---|---|---|---|---|
| Shufflenet | Computer Vision | Done | Laptop, Raspberry Pi | Workload: 100.6m(J); Release Date: 16 December 2018; Target use cases: Classification, Image Classification; Model Size: 2.6M (tflite) | |
| Squeezenet | Computer Vision | Done | Laptop, Raspberry Pi | Workload: 146.4m; Release Date: February 22, 2016; Target use cases: Image Classification; Model Size: 5.0M | |
| Alexnet | Computer Vision | Done | Laptop, Raspberry Pi | Workload: 261.0m; Release Date: May 22, 2022; Target use cases: Classification, Image Classification; Model Size: 61.2M | |
| Mediapipe-pose-estimation | Computer Vision | Done | Laptop, Raspberry Pi | Workload: 329.8m; Release Date: 05 May 2023; Target use cases: Pose Estimation; Model Size: 3.3M | |
| Mediapipe-image-segmentation | Computer Vision | Done | Laptop, Raspberry Pi | Workload: 255.6m; Release Date: 01 June 2023; Target use cases: Person Segmentation, Object Segmentation; Model Size: 2.8M | |
| Inception-v3 | Computer Vision | Done | Laptop, Raspberry Pi | Workload: 1.4; Release Date: 2015; Target use cases: Person Classification, Image Recognition; Model Size: 24.3M | |
| yolo-v5-object-detection | Computer Vision | Done | Laptop, Raspberry Pi | Workload: 1.3; Release Date: June 29, 2020; Target use cases: Detection, Object Detection; Model Size: 10.9 | |
| yolo-v7-pose-estimation | Computer Vision | Done | Laptop, Raspberry Pi | | |
| yolact-image-segmentation | Computer Vision | Done | Laptop, Raspberry Pi, SR110 (MCU) | Workload: 736.5m; Release Date: December 16th, 2019; Target use cases: Segmentation, Object Segmentation; Model Size: 1.7M | |
| yolo-v8-object-detection | Computer Vision | Done | Laptop, Raspberry Pi | Workload: 7.0m; Release Date: January 2023; Target use cases: Person Detection, Object Detection; Model Size: 12.9M | |
| yolo-v8-person-segmentation | Computer Vision | Done | Laptop, Raspberry Pi | Release Date: January 2023; Target use cases: Segmentation, Image Segmentation; Model Size: 13.9M | |
| yolo-v8-pose-estimation | Computer Vision | Done | Laptop, Raspberry Pi | Workload: 1.6; Release Date: Sep 16, 2023; Target use cases: Segmentation, Image Segmentation; Model Size: 13.5M | |
| Resnet-50 | Computer Vision | Done | Laptop, Raspberry Pi | | |
| Mobilenet-v3 | Computer Vision | Done | Laptop, Raspberry Pi | | |
| Swin-tiny | Computer Vision | To-Do | Laptop, Raspberry Pi | | |
| tiny - DETR | Computer Vision | To-Do | Laptop, Raspberry Pi | | |
| Apple OpenELM 270M / 450M | Text Based - Language | To-Do | Laptop, Raspberry Pi | | |
| WAV2VEC2_ASR Model | Speech Recognition | In Progress | Laptop, Raspberry Pi | | |
| Custom Trained Speech Recognition Model (Using Keras Sequential) | Speech Recognition | Done | Laptop, Raspberry Pi, Arduino Nano 33 BLE | | |
| yolov8 Face Detection | Computer Vision | To-Do | Laptop, Raspberry Pi | | |
| Whisper (OpenAI Speech Recognition model) | Speech Recognition | In Progress | Laptop, Raspberry Pi | | |
| GoogLeNet | Computer Vision | To-Do | Laptop, Raspberry Pi | | |
| MNASNet05 | Computer Vision | Done | Laptop, Raspberry Pi | | |
| MobileNet-V2 | Computer Vision | Done | Laptop, Raspberry Pi | | |
| MobileNet-v3-Large | Computer Vision | Done | Laptop, Raspberry Pi | Workload: 171.3m; Release Date: 2019; Target use cases: Image Classification; Model Size: 21.9M | |
| MobileNet-v3-Small | Computer Vision | Done | Laptop, Raspberry Pi | Workload: 86.3m; Release Date: 2019; Target use cases: Image Classification; Model Size: 2.9M | |
| ResNet50 | Computer Vision | Done | Laptop, Raspberry Pi | Workload: 794.5m; Release Date: 2015; Target use cases: Image Classification; Model Size: 26.3M | |
| ResNet101 | Computer Vision | Done | Laptop, Raspberry Pi | Workload: 1.4; Release Date: 2015; Target use cases: Image Classification; Model Size: 46.0M | |
| MediaPipe-Selfie-Segmentation | Computer Vision | To-Do | Laptop, Raspberry Pi | | |
| FastSam-S | Computer Vision | To-Do | Laptop, Raspberry Pi | | |
| DeepLabV3-ResNet50 | Computer Vision | To-Do | Laptop, Raspberry Pi | | |
| YoloV10 Image Segmentation | Computer Vision | To-Do | Laptop, Raspberry Pi | | |
Recommendation service Design
Initial flow design (will be updated)

Monitor mode flow
- Added a Kafka plugin on APISIX that routes the /api/ endpoint to the Kafka topic.
- Sample message published on this topic:
- The monitoring service listens on this topic and filters only the relevant endpoints.
- The monitoring period for the device is configured through an ENV variable and is set in seconds.
- Once monitoring starts, the information is stored in Redis with the prefix. Sample Redis data: key - , value -
- The service then sends sendUpdate requests to the device for the monitoring period.
- Once monitoring is completed, the response is sent to the Kafka topic; the ack is sent only if the ENV flag is enabled. Sample response:
- If another message arrives for the same device while monitoring is in progress, that message is rejected.

Recommendation service Kafka flow
- This service listens on the Kafka topic; once data is present, it fetches the data from the table using the customer and type parameters received in the message.
- Fields fetched from the table for the different types: Accesspoint, radio_24, radio_5, clients.
- For the client type we may receive more than one value, so we calculate the average of those values and reduce them to one.
- Once all the data is fetched, we form a JSON payload and call the recommendation ML API to get the recommended response.
- Once the response is received, we update the data in Redis. Sample response / saved data:

Recommendation service ML API flow
We have a recommendation AI model service that returns the recommended response for the data we send.
Endpoint / Headers / Payload / Response: status code 200

Recommendation service API flow
- This service API is used to get the data from Redis.
- First it checks whether the device is present in Redis; if it is, the recommendation response is returned.
- If it is not present, the API checks whether the device exists in the cloud; if it does, it fetches the data from the table and follows the same process described in the Kafka flow.
Endpoint / Headers / Body / Response: status codes 200 - OK, 400 - Bad Request
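The Kafka flow above can be sketched in Python. Everything below, including the topic name, table and field names, the ML API URL, the Redis key prefix, and the query_table helper, is a hypothetical placeholder since the design elides the real values; it only illustrates the consume, fetch, predict, cache sequence.

```python
# Hedged sketch of the recommendation Kafka flow; all names are placeholders.
import json
import redis
import requests
from kafka import KafkaConsumer

ML_API_URL = "http://recommendation-ml:8000/predict"    # hypothetical endpoint
REDIS_PREFIX = "recommendation:"                        # hypothetical key prefix

consumer = KafkaConsumer("recommendation-requests",     # hypothetical topic name
                         bootstrap_servers="kafka:9092",
                         value_deserializer=lambda m: json.loads(m.decode()))
cache = redis.Redis(host="redis", port=6379)

def query_table(customer: str, dtype: str) -> list[dict]:
    """Stub for the real DB query; returns dummy rows for illustration."""
    return [{"rssi": -60, "snr": 30}, {"rssi": -70, "snr": 25}]

def fetch_device_data(customer: str, dtype: str) -> dict:
    """Fetch accesspoint/radio_24/radio_5/client fields for one device."""
    rows = query_table(customer, dtype)
    if dtype == "clients" and len(rows) > 1:
        # Multiple client rows are averaged into a single record, as described above.
        keys = rows[0].keys()
        return {k: sum(r[k] for r in rows) / len(rows) for k in keys}
    return rows[0]

for message in consumer:
    msg = message.value                                 # e.g. {"mac": ..., "customer": ..., "type": ...}
    features = fetch_device_data(msg["customer"], msg["type"])
    resp = requests.post(ML_API_URL, json=features, timeout=10)
    if resp.status_code == 200:
        cache.set(REDIS_PREFIX + msg["mac"], json.dumps(resp.json()))
```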
API changes based on New Schema
API design

LANDING PAGE
1. Categories Listing API

Explore Page
2. Filter Listing API
3. Filter API based on domain

Models Listing
4. Fetch all models:
5. Filter models based on domain:
6. Filter models based on platform:
7. Filter models based on platform and domain:
8. Filter models based on category:
9. Filter models based on category, domain:
10. Filter models based on category, domain, platform:

Model Details Page
- Get Metrics (usecase/models):
- Get Model Usecase Details

11. Insert generic models:
12. Insert usecase models:
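To illustrate how the model-listing filters (items 4-10) could be exposed, here is a minimal Flask sketch. The route path, query-parameter names, and the in-memory MODELS list are hypothetical, since the document lists the endpoints without their exact signatures.

```python
# Hypothetical sketch of the model listing/filter endpoint (names are assumptions).
from flask import Flask, jsonify, request

app = Flask(__name__)

MODELS = [  # stand-in for the real models table
    {"name": "Squeezenet", "domain": "Computer Vision", "platform": "Raspberry Pi", "category": "general"},
    {"name": "WAV2VEC2_ASR", "domain": "Speech Recognition", "platform": "Laptop", "category": "general"},
]

@app.get("/api/models")
def list_models():
    """Fetch all models, optionally filtered by domain, platform, and/or category."""
    results = MODELS
    for field in ("domain", "platform", "category"):
        value = request.args.get(field)
        if value:
            results = [m for m in results if m[field] == value]
    return jsonify(results)

# Example: GET /api/models?domain=Computer+Vision&platform=Raspberry+Pi
```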
Status Update(27th - 31st)
Executive Summary: !!!! Accomplishments: !!!! Issues for Escalations/Blockers: !!NIL!! Plan for next week: !!!! PTO and Vacation: Vimal: !!!! Vishnu: !!!! Ganapathy: !!!! Thangavel: !!!! Jyotsna: !!!! Shri Ramki: !!!! DETAILED SUMMARY : JIRA TICKET(S) COMPLETED: [VIMAL] !!JIRA-ID!!: !!Description!! – ==Closed.== !!Brief Summary of the ticket in 3-4 points!! [VISHNU] !!JIRA-ID!!: !!Description!! – ==Closed.== !!Brief Summary of the ticket in 3-4 points!! [GANAPATHY] !!JIRA-ID!!: !!Description!! – ==Closed.== !!Brief Summary of the ticket in 3-4 points!! [THANGAVEL] !!JIRA-ID!!: !!Description!! – ==Closed.== !!Brief Summary of the ticket in 3-4 points!! [JYOSTNA] !!JIRA-ID!!: !!Description!! – ==Closed.== !!Brief Summary of the ticket in 3-4 points!! [SHRI] !!JIRA-ID!!: !!Description!! – ==Closed.== !!Brief Summary of the ticket in 3-4 points!! JIRA TICKET(S) IN PROGRESS: [VIMAL] !!JIRA-ID!!: !!Description!! – ==In Progress.== !!Brief Summary of the ticket in 2-3 points!! [VISHNU] AIR-3651: Create DB/redis/Kafka connection & implement business logic for recommendation service – ==In Progress.== added the recommendation ML API to get the response. saved the response in the Redis cache. handled the recommendation API to get the recommendation ML response. [GANAPATHY] !!JIRA-ID!!: !!Description!! – ==In Progress.== !!Brief Summary of the ticket in 2-3 points!! [THANGAVEL] !!JIRA-ID!!: !!Description!! – ==In Progress.== !!Brief Summary of the ticket in 3-4 points!! [JYOSTNA] !!JIRA-ID!!: !!Description!! – ==In Progress.== !!Brief Summary of the ticket in 3-4 points!! [SHRI] !!JIRA-ID!!: !!Description!! – ==In Progress.== !!Brief Summary of the ticket in 3-4 points!! JIRA TICKET(S) ON HOLD: [VIMAL] !!JIRA-ID!!: !!Description!! – ==On Hold.== !!Brief Summary of the ticket in 2-3 points!! [VISHNU] !!JIRA-ID!!: !!Description!! – ==On Hold.== !!Brief Summary of the ticket in 2-3 points!! [SANJAY] !!JIRA-ID!!: !!Description!! – ==On Hold.== !!Brief Summary of the ticket in 2-3 points!! [GANAPATHY] !!JIRA-ID!!: !!Description!! – ==On Hold.== !!Brief Summary of the ticket in 2-3 points!! [THANGAVEL] !!JIRA-ID!!: !!Description!! – ==On Hold.== !!Brief Summary of the ticket in 2-3 points!! [JYOSTNA] !!JIRA-ID!!: !!Description!! – ==On Hold.== !!Brief Summary of the ticket in 2-3 points!! [SHRI] !!JIRA-ID!!: !!Description!! – ==On Hold.== * !!Brief Summary of the ticket in 2-3 points!!
User Management
To implement the user management module for your model-zoo web app, you'll need both server-side and client-side code. Here's an overview of the steps involved:

Server-side:
1. Set up Amazon Cognito User Pool:
   - Create a new User Pool in the AWS Cognito console.
   - Configure the User Pool settings according to your requirements (e.g., password policies, MFA, etc.).
   - Create two groups: "general-users" and "ms-users".
2. Set up Express.js server:
   - Install the required dependencies and any other necessary libraries.
   - Initialize the AWS SDK with your credentials and configure the Cognito service.
3. Implement sign-up and login routes:
   - Create a route for sign-up that accepts username, email, and password from the client.
   - Use the appropriate method from the AWS SDK to create a new user in the User Pool and add them to the "general-users" group.
   - Create a route for login that accepts username and password from the client.
   - Use the appropriate method from the AWS SDK to authenticate the user and generate session tokens.
4. Implement Microsoft OAuth2.0 routes:
   - Create a route to handle the Microsoft OAuth2.0 redirect URI.
   - Use a library to retrieve the access token from the authorization code.
   - Use the Microsoft Graph API to fetch the user's profile information (username, email).
   - Use the AWS SDK to create a new user in the User Pool and add them to the "ms-users" group.

Client-side (assuming you're using a modern JavaScript framework like React):
1. Set up sign-up and login forms:
   - Create a sign-up form that collects username, email, and password.
   - Create a login form that collects username and password.
   - Send the form data to your server-side sign-up and login routes.
2. Implement Microsoft OAuth2.0 flow:
   - Use a library to initiate the Microsoft OAuth2.0 flow.
   - Handle the redirect URI on the client side and retrieve the authorization code.
   - Send the authorization code to your server-side Microsoft OAuth2.0 route to complete the sign-up/login process.
3. Handle user session:
   - Store the session tokens (ID token, access token, refresh token) received from the server in client-side storage (e.g., localStorage or cookies).
   - Use the session tokens to authenticate subsequent requests to your server-side API routes.

Here's an example of how to retrieve user information from the Microsoft authorization code on the server-side: Note: You'll need to replace the placeholders with your actual Microsoft OAuth2.0 application credentials and redirect URI. This is a high-level overview of the implementation. You'll need to fill in the details and handle additional cases (e.g., error handling, token refresh, etc.) according to your specific requirements.
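The document targets an Express.js server, but to keep this page self-contained here is a minimal Python/boto3 sketch of the same Cognito calls (sign-up, group assignment, and login). The pool ID, app client ID, and region are hypothetical placeholders; the real project would make the equivalent calls from the Node.js AWS SDK.

```python
# Hedged sketch of the Cognito sign-up/login flow; IDs and region are placeholders.
import boto3

USER_POOL_ID = "us-east-1_EXAMPLE"        # hypothetical User Pool ID
APP_CLIENT_ID = "example-app-client-id"   # hypothetical app client ID
cognito = boto3.client("cognito-idp", region_name="us-east-1")

def sign_up(username: str, email: str, password: str) -> None:
    """Create the user and place them in the 'general-users' group."""
    cognito.sign_up(
        ClientId=APP_CLIENT_ID,
        Username=username,
        Password=password,
        UserAttributes=[{"Name": "email", "Value": email}],
    )
    cognito.admin_add_user_to_group(
        UserPoolId=USER_POOL_ID, Username=username, GroupName="general-users"
    )

def login(username: str, password: str) -> dict:
    """Authenticate and return the session tokens (ID, access, refresh)."""
    resp = cognito.initiate_auth(
        ClientId=APP_CLIENT_ID,
        AuthFlow="USER_PASSWORD_AUTH",
        AuthParameters={"USERNAME": username, "PASSWORD": password},
    )
    return resp["AuthenticationResult"]
```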
Feature Audit
Self Healing
- Parameters that are unsupported: WAN Down, FXS Port Down
- Parameters that are main AP only: CPU, Memory, Temperature, Uptime
- Parameters supported by all: Coverage Hole Score, Current Channel, Interference

Client Steering
Steering is possible, but not with all of the same configurations. We will want to evaluate whether we want to bridge the existing Astral config to the TR-181 data model, or prioritize this as a new feature.

DFS
Cannot control the DFS in the same way.

Mesh Formation
Cannot configure the Mesh in the same way as Astral. Relies on EasyMesh.
Model Artifacts
General Models
:::info
The below artifacts will remain common across all the general/pretrained models
:::
The following are the artifacts for any of the general usecase models.
- Model Image / Video
  - Video / GIF in model details page - showcase of how the model works
  - Image thumbnail for the model in the search page
- Model available for download in any/all of the available formats
  - pt
  - *tflite
  - h5
  - zip
  - synap (for Synaptics models)
  - onnx
  - Different format parameters and other details - *after base version
- Github link of the general implementation of the model
- Research paper link from where the model has been adapted
- *Dataset link
- Model Summary Metrics
  - Inference speed
  - Preprocessing speed
  - Post-processing speed
  - Model loading time
- Input and output samples - input / output layer size / parameters - after base version
  - Sample input
  - Sample output
- Detailed layerwise metrics
- Internal Gitlab repo with sample.py and info about the models
- Sample code in website - *after base version
- Steps on how to use this pretrained model for prediction, validation and visualization
  - Steps on how to install the CLI tool
  - Commands to use the CLI tool to perform prediction and validation
  - Complete Readme link on how to use the CLI tool for other operations
- Tags - Use cases / Features / Platforms / Category
- Similar models
- Model visualization in UI - feasibility check

Usecases / Demos
The following are the artifacts for any of the general usecase models.
- Model Image / Video - showcase of how the model works
- Model available for download in any/all of the available formats
  - pt
  - tflite
  - h5
  - zip
  - synap (for Synaptics models)
  - onnx
- Public Github link for the model's code with training and inference code
- Research paper of backbone model
- Model Summary Metrics
  - Inference speed
  - Preprocessing speed
  - Post-processing speed
  - Model loading time
- Detailed layerwise metrics
- Steps on how to use the model to perform custom training, prediction and validation
  - Steps on how to train the model using the code repo
  - Commands to use the CLI tool / code repo to perform prediction and validation
  - Complete Readme link on how the model works and the design aspects of the model
- Similar use cases
  - How it can be achieved
  - Description
  - Requirements
  - Sensors
  - Other components - input / output
  - Software
  - Sample application / code
  - Any tuning / optimizations done

Synaptics Models Artifacts
- Convert the model into synap format.
- Pass the model with the usecase to the mlcommons script to generate the metrics.
- Layerwise metrics is not getting generated from the mlcommons benchmarking.
- Can use the mltk profiler to generate the
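Returning to the "steps on how to use this pretrained model for prediction" artifact listed above, a minimal TensorFlow Lite sketch for one of the downloadable tflite classifiers could look like the following. The file name is hypothetical and a float32 input tensor is assumed; in practice the documented CLI tool would be used instead.

```python
# Minimal sketch of running a downloaded .tflite classification model.
# The model path is an assumption for illustration; a float32 input is assumed.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="squeezenet.tflite")  # hypothetical file
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Fake image batch matching the model's expected input shape.
dummy_input = np.random.rand(*input_details[0]["shape"]).astype(np.float32)

interpreter.set_tensor(input_details[0]["index"], dummy_input)
interpreter.invoke()
scores = interpreter.get_tensor(output_details[0]["index"])
print("Top class:", int(np.argmax(scores)))
```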
MLTK Vs Pytorch Profiler
| Particulars | Silabs-MLTK | PyTorch Profiler |
|---|---|---|
| Models Support | Supports all models | Only vision-based models |
| Formats Support | Can profile tflite models and models built using MLTK | Not format specific and cannot generate format-specific metrics |
| Device Support | Profiles in a simulator based on ARM Cortex-M33. Does not get installed on the RPi due to compatibility issues | Supports RPi and local CPU systems |
| Metrics Support | Provides detailed layer-wise metrics and has all the highlight metrics planned for display on the website (energy, model size, inference speed, etc.) | Provides only layer-wise metrics, which are compatible only with CPU, and does not provide any of the highlight metrics planned |
| Performance speed | Highly efficient and fast in generating metrics for all the layers and the general metrics | Takes more time to generate metrics for a model when compared to MLTK |

Sample Pytorch Metrics
Sample MLTK Metrics

Profilers and Metrics:

| Profiler | Metrics |
|---|---|
| MLTK | General highlight metrics, energy, inferences per second and layer-level metrics |
| Pytorch Profiler | Layer-level metrics, CPU metrics |

Synaptics
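For reference, the kind of layer/operator-level CPU metrics the PyTorch profiler produces can be obtained with a short snippet like the one below. The ResNet-50 model choice and the printed columns are just an illustration of the profiler API, not the exact script used for the comparison above.

```python
# Illustrative use of the PyTorch profiler for operator-level CPU metrics.
import torch
import torchvision.models as models
from torch.profiler import profile, record_function, ProfilerActivity

model = models.resnet50(weights=None).eval()   # any vision model works here
example = torch.randn(1, 3, 224, 224)

with profile(activities=[ProfilerActivity.CPU], record_shapes=True) as prof:
    with record_function("model_inference"):
        with torch.no_grad():
            model(example)

# Operator-level table sorted by CPU time, similar to the "Sample Pytorch Metrics" above.
print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=10))
```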
Model Metrics detailed
Sample layer-wise metrics table for the Squeezenet model

| Layer | Ops (M) | MACs (M) | CPU_Cycles (M) | Energy (m) | Time (m) |
|---|---|---|---|---|---|
| conv_2d | 21.3 | 166.8 | 34.5 | 2.1 | 0 |
| concatenation | 0 | 0 | 0 | 0 | 0 |
| conv_2d | 12.5 | 6.2 | 13.2 | 2.6 | 169.4 |
| max_pool_2d | 839.8K | 0 | 0 | 0 | 0 |
| conv_2d | 28.1 | 14.0 | 21.9 | 4.4 | 281.4 |

Summary
Labels of the profiling metrics
Layer wise metrics
Synaptics Model Metrics:
Model Page Architecture
Landing page
In our landing page's implementation, we've incorporated a static caching route to optimize performance. During the build process, we've configured Next.js to statically cache the data-fetching process. This means that when the page is built, the fetched data is stored in the cache. Consequently, when a user requests the landing page, the data is retrieved directly from this cache, bypassing the need to fetch it from the server again. This setup significantly enhances the page's loading speed, as users receive cached data instantaneously rather than waiting for a server response. It not only boosts performance but also reduces server load, ensuring a smoother browsing experience for our visitors. In essence, by implementing this static caching route, we've streamlined the data retrieval process, optimizing the landing page's performance and improving user satisfaction.

Model List Page with Search feature
Our model page with a search feature is engineered to strike a balance between static caching for improved performance and dynamic responses for up-to-date search results. Upon initial build, the model page is cached, ensuring swift loading times for visitors accessing it. This pre-cached version serves as the default response for subsequent visits to the same URL, providing a seamless user experience. However, to accommodate dynamic search queries, we've implemented a mechanism that triggers a revalidation process every 3,600 seconds, or one hour. When a user initiates a search, this process ensures that the search API is queried dynamically, fetching fresh data from the server for any new API request that has not been cached, and serving it to the user in real time. This approach offers the best of both worlds: cached data for repeated visits to the same URL, optimizing performance, and dynamic responses for search queries, ensuring that users always receive the latest information.

Model Details page
We've identified a subset of model details pages that are frequently accessed by users. To optimize the loading speed and reduce server load for these pages, we've implemented dynamic route caching. This involves caching the HTML output of these pages during the build process using Next.js's static site generation capabilities.
For model details pages that are not as frequently accessed or may have more dynamic content, we've adopted a revalidation approach. With this method, the data for these pages is fetched from the server when requested, but Next.js automatically revalidates and updates the cached version at specified intervals using the revalidatePath method.

Miscellaneous Optimization Methods
Code Splitting and Suspense Component
We implement advanced techniques like Suspense and code splitting to minimize the impact of data loading on perceived performance. When a component relies on fetching data, we leverage Suspense to render a placeholder or "skimmer" UI element, providing users with visual feedback that content is loading. Concurrently, we employ code splitting to dynamically load server response data, ensuring that the initial page load remains fast and responsive.
Get Work Hour Flow
Get work hours

| Query Param | Values |
|---|---|
| approved | true, false, 0, 1 |
| type | work, meeting, other |

List work hours of the logged-in user:
1. to see recent history of work hours
2. to see pending approvals

Logic:
- validate filters
- get latest entries (48 hours) from the work table under the logged-in employee
- apply filters based on the query
- for all the entries, try to get approvals from the approvals table (check if an SQL join is possible)
- add approvals data to the list of work hour entries

Get work hours of team members

| Query Param | Values |
|---|---|
| approved | true, false, 0, 1 |
| type | work, meeting, other |

List work hours of team members (only applies to managers) of the logged-in user:
1. to see all the pending approvals
2. to see approved work hours

Logic:
- validate filters
- get all employees whose manager is the logged-in employee
- get latest (last 48 hours) work hour entries of those users
- apply filters based on the query
- try to get all approvals for the work hour entries
- add approval data to the list of work hour entries
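On the "(check if an SQL join is possible)" note above: a single LEFT JOIN between the work and approvals tables does cover this. The sketch below is a hypothetical Python/SQLite illustration; it assumes the approvals table carries a work_id foreign key to the work entry, which the DB structure section does not list explicitly, and the real schema, ORM, and filter plumbing may differ.

```python
# Hypothetical sketch: fetch a user's recent work hours with approval data in one query.
import sqlite3
import time

def get_work_hours(conn: sqlite3.Connection, employee_id: int,
                   approved: bool | None = None, wtype: str | None = None) -> list[dict]:
    since = int(time.time()) - 48 * 3600          # last 48 hours, as in the logic above
    query = """
        SELECT w.id, w.title, w."from", w."to", w.type, w.is_approved,
               a.approved_by, a.approved_at
        FROM work AS w
        LEFT JOIN approvals AS a ON a.work_id = w.id   -- work_id is an assumed link column
        WHERE w.employee_id = ? AND w."from" >= ?
    """
    params: list = [employee_id, since]
    if approved is not None:                      # optional ?approved= filter
        query += " AND w.is_approved = ?"
        params.append(int(approved))
    if wtype is not None:                         # optional ?type= filter
        query += " AND w.type = ?"
        params.append(wtype)
    conn.row_factory = sqlite3.Row
    return [dict(row) for row in conn.execute(query, params)]
```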
Work Hour Approval Flow
Listing pending approvals can be done from the get work hours API.

Approving Work hour
Approve a list of work hour entries (only applies to managers):
- validate
- add an entry to the approvals table
- set the approved flag to true on the specified work hour entries and update the same in the DB

DB structure

Approval
| Column Name | Type | Constraints |
|---|---|---|
| id | int | |
| approved_by | int | ForeignKey |
| approved_at | int | timestamp |
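A minimal sketch of the approval step follows, again assuming an SQLite-style store and a work_id column linking approvals to work entries (an assumption, since the documented Approval table lists only id, approved_by, and approved_at).

```python
# Hypothetical sketch: approve a batch of work hour entries in one transaction.
import sqlite3
import time

def approve_work_hours(conn: sqlite3.Connection, manager_id: int,
                       work_ids: list[int]) -> None:
    now = int(time.time())
    with conn:  # commits on success, rolls back on error
        for work_id in work_ids:
            # add an entry to the approvals table (work_id column is an assumed link)
            conn.execute(
                "INSERT INTO approvals (work_id, approved_by, approved_at) VALUES (?, ?, ?)",
                (work_id, manager_id, now),
            )
        # set the approved flag to true on the specified work hour entries
        conn.execute(
            f"UPDATE work SET is_approved = 1 WHERE id IN ({','.join('?' * len(work_ids))})",
            work_ids,
        )
```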
Add Work Hour Flow
UI wireframe:

API:
Adds work hour entry
- Validate
- Add entry to work table

Adds meeting entry
- Validate
- Add entry to meeting table
- If not recurrent, add the same entry to the work table; else, a worker running as a cronjob will add this meeting as work for the recurrence days

Validations (see the sketch after the tables below):
1. Basic check for empty values
2. from < to
3. from < 24 hours of current time
4. recurrence_days values should be 0 to 6

Database structure:

Work
| Column Name | Column Type | Constraints |
|---|---|---|
| id | int | |
| title | string | |
| description | string | |
| employee_id | int | ForeignKey |
| from | int | timestamp |
| to | int | timestamp |
| type | int | work, other |
| references | []string | default: [] |
| is_approved | bool | default: false |

Meeting
| Column Name | Column Type | Constraints |
|---|---|---|
| id | int | |
| title | string | |
| description | string | |
| employee_id | int | ForeignKey |
| from | int | timestamp |
| to | int | timestamp |
| recurrence_days | []int | default: [] and values: 0-6 |
| is_approved | bool | default: false |
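A small Python sketch of the four validations listed above; the function name and error handling are assumptions, since the document only states the rules, and rule 3 reads ambiguously, so it is interpreted here as "from may be at most 24 hours in the past".

```python
# Hypothetical validation helper for the add-work-hour / add-meeting payloads.
import time

def validate_entry(payload: dict, is_meeting: bool = False) -> list[str]:
    """Return a list of validation errors (an empty list means the entry is valid)."""
    errors = []

    # 1. Basic check for empty values
    for field in ("title", "employee_id", "from", "to"):
        if payload.get(field) in (None, ""):
            errors.append(f"{field} must not be empty")

    # 2. from < to
    if payload.get("from") is not None and payload.get("to") is not None:
        if payload["from"] >= payload["to"]:
            errors.append("'from' must be earlier than 'to'")

    # 3. from within 24 hours of the current time (assumed interpretation of the rule)
    if payload.get("from") is not None:
        if payload["from"] < time.time() - 24 * 3600:
            errors.append("'from' must be within the last 24 hours")

    # 4. recurrence_days values should be 0 to 6 (meetings only)
    if is_meeting:
        if any(d not in range(7) for d in payload.get("recurrence_days", [])):
            errors.append("recurrence_days values must be between 0 and 6")

    return errors

# Example: validate_entry({"title": "Standup", "employee_id": 7,
#                          "from": 1717056000, "to": 1717058000})
```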
Design and Architecture
Questions
1. Can we have a separate service to manage employee-related data? It would handle only employee-data CRUD for the admin.
2. Can we have separate codebases for the different web applications? (micro-frontend)
3. How can we add web analytics that monitors user behavior?
Workflow[dvc-mlflow]
!Model Lifecycle Backend

What is DVC?
DVC is a version control system for data and machine learning projects. It helps you track changes to your datasets, manage experiments, and collaborate with team members. Here's how you can use DVC in MLOps:
1. Setup DVC: Install DVC and initialize a DVC project.
2. Version Control for Data: Add data to DVC and push it to remote storage.
3. Version Control for Models: Add models; version-control models the same way as data.
4. Manage Experiments: Use DVC to manage experiments, track metrics, and compare different versions of your model.
5. Collaborate: Use DVC to collaborate with team members. You can use remote storage like AWS S3 or Google Cloud Storage to store your data and models.
6. Usage with S3: To use S3 with DVC, set DVC's remote storage to the specific S3 bucket.

What is MLflow?
MLflow is an open-source platform designed to manage the machine learning lifecycle, including experimentation, reproducibility, and deployment.
1. Logging Metrics: MLflow allows you to log and track various metrics (e.g., accuracy, loss) during model training. This helps in monitoring the performance of models over time.
2. Visualization: MLflow provides visualization tools to plot metrics and visualize the performance of models across different experiments. You can compare metrics using interactive plots.
3. Custom Metrics: You can log custom metrics specific to your use case, enabling flexible tracking of performance indicators tailored to your project's needs.
4. Automatic Logging: MLflow integrates with popular machine learning libraries (like TensorFlow, PyTorch, and Scikit-learn) to automatically log metrics and parameters without additional code.

Experiment Tracking with MLflow
1. Experiment Organization: MLflow helps organize experiments by grouping runs under experiments. This makes it easier to manage and compare different runs.
2. Run Management: Each run can be tracked individually, capturing parameters, metrics, and artifacts (like models and data) associated with the run.
3. Reproducibility: By recording the parameters, code version, data, and environment used for each run, MLflow ensures experiments are reproducible.
4. Parameter Tracking: Track hyperparameters and configurations used in each run, which helps in understanding the impact of different configurations on model performance.
5. Comparison and Analysis: MLflow allows the comparison of multiple runs side-by-side, facilitating the analysis of which experiment configurations work best.
6. Artifacts Storage: Store and manage artifacts produced during the experiment, such as model files, datasets, and other relevant files, ensuring all components of an experiment are preserved.

ml-usecases repo
The ml-usecases repo is used for model training, tuning and metric collection for various use cases. In this repo we have created a sample project using DVC and MLflow for YOLOv8 object detection. We have custom-trained the YOLOv8 model with our own vehicle dataset, which is tracked using DVC, and the metrics are tracked in MLflow. Here we have used the Ultralytics backend, which provides built-in parameters for tuning like imgsz, epochs, etc.
:::info This is just a sample code snippet, but the repo can be utilized based on the use cases, and the code can be modified for training, hyper-parameter tuning and other cases as needed. The code in the repo can be used as a sample to proceed with for the training, hyper-parameter tuning or quantization steps. :::

Workflow Process
1. The training of the use cases can be done on the GPU by procuring access to the dev-sandbox using the instructions in Dev-Sandbox and Onboarding Guide to AI/ML.
2. Once the access is procured, the repo below can be cloned in the sandbox and the GPU can be utilized for training.
3. Repo: ml-usecases
4. Branch to checkout from: dvc-mlflow
5. Checkout a new branch
6. Create a new folder for your model
7. Initialize DVC using the command below: dvc init --subdir
8. Set remote storage: dvc remote add -d storage s3://mybucket/dvcstore
9. Add your data to DVC: dvc add $folder_name
10. Push your data: dvc push
11. Now your data will be stored in the remote and will be tracked in git.
12. Create a details.txt file → refer to the README.
13. The run name and the experiment name can be configured based on the use case for MLflow (see the sketch below).
14. Metrics can be viewed in :

References
ML-Ops
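To make step 13 concrete, here is a minimal sketch of configuring the MLflow experiment/run names and logging parameters, metrics and artifacts. The tracking URI, names, values and artifact path are placeholders, not the ones used in the ml-usecases repo:

```python
import mlflow

# Placeholder tracking server; in practice point this at the team's MLflow instance.
mlflow.set_tracking_uri("http://localhost:5000")

# Experiment and run names are chosen per use case (step 13 above).
mlflow.set_experiment("yolov8-vehicle-detection")

with mlflow.start_run(run_name="baseline-imgsz640-epochs50"):
    # Hyperparameters used for the training run
    mlflow.log_param("imgsz", 640)
    mlflow.log_param("epochs", 50)

    # Metrics collected after training/validation
    mlflow.log_metric("mAP50", 0.71)
    mlflow.log_metric("val_loss", 0.42)

    # Artifacts such as the trained weights (path is illustrative)
    mlflow.log_artifact("runs/train/weights/best.pt")
```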
Work Hours Management
Major Components
- Authentication
- Work Hours dashboard
- Add work hours
- Recurring meeting
- Manager Approval dashboard

Authentication
We can have a separate microservice for the authentication mechanism that handles all login requests. Once a user is authenticated, this service generates a session or token that can be used to authenticate subsequent requests from any of the other backend services.

Microsoft Login:
- User logs in by giving his/her credentials.
- The Login API is hit on the authentication service.
- Once the auth code or token is received from Microsoft Azure, a JWT token is generated for this user with one-day expiry (tentative).
- The token is sent in the response payload to the client application.

Admin Login:
- User logs in by giving his/her credentials.
- The Login API is hit on the authentication service.
- A JWT token is generated for this user with one-day expiry (tentative).
- The token is sent in the response payload to the client application.

Work Hours Dashboard
This dashboard can contain
- a list of logged work hours.
- dropdowns for the categories Team, Member and Date.
- total work hours for the selected date (office + outside office).

Add Work Hours
This can be a popup component or a separate page. This can contain
- date picker - to set the date
- time range - to set the from/to time range
- type of work dropdown - Meeting/Recurring meeting/Escalation/Monitoring/…
- task description

Once the user has added the work hours, they should be sent for Manager Approval. Once approved, they should get added to the total work hours, and the dashboard should list the entry. We can have a list of pending approvals, and a withdraw/discard option.

Recurring Meetings
We can have a utility to add recurring meetings. This can contain
- a multi-select dropdown for the days of occurrence.
- the time duration of the occurrence.
- meeting title and description.

Once the recurring meeting is added, it will be sent for Manager approval. Once the Manager has approved, or after auto approval, the work hours for the corresponding days should be automatically added to the user's work hours dashboard. We can also have an option for cancelling the meeting for one day or cancelling the entire occurrence. Recurring meeting approvals are to be sent to the Manager every day.

Manager Approval Dashboard
This dashboard can contain
- dropdowns for Team and Team members.
- a list of pending approvals and a checkbox/radio button to approve all requests.
- a list of approved entries.

Questions
1. Is an employee allowed to add only that day's work hours? - previous 24 hours
2. How many days of work hour data should we maintain? - last 3 months
3. Should we have a work hours discard/edit option? yes
4. Any APIs available to access greytHR?

API List (see the usage sketch after this list)
1. Auth
   1. /login/password
   2. /login/office365
2. List work hours
   1. GET /api/work-hours/list?team=&member=&date=
3. Total work hours
   1. GET /api/work-hours/total-hours (returns {office: , outside: })
   2. GET /api/work-hours/total-office-hours
   3. GET /api/work-hours/total-remote-hours
4. Meetings list
   1. GET /api/work-hours/meetings
   2. DELETE /api/work-hours/meetings/{meeting_id}?occurrence_type=&occurrence_value=
      - occurrence_type = one_day / specific_days / specific_dates / all
DELETE /api/work-hours/meetings/{meeting_id}?occurrence_type=specific_dates&occurrence_value=2024-06-03,2024-06-05 d) all = no occur value needed DELETE /api/work-hours/meetings/{meeting_id}?occurrence_type=all 5. Add work hours 1. POST /api/work-hours/add, nreq_body = {date: , start_time:, end_time:, work_type:, title:, description:} 6. Pending manager approvals list 1. GET /api/work-hours/approvals?status=pending 7. Completed manager approvals list 1. GET /api/work-hours/approvals?status=completed 8. Approve action 1. POST /api/work-hours/approvals Request body = {action: , team:, member:, ids: } 1. action = approve/approve_all/reject/reject_all 2. ids = [] of meeting ids - optional for approve_all & reject_all
Model Zoo APIS
API design

LANDING PAGE
- Use-case Listing API

Explore Page
- Filters Listing API
- Filters based on category

Models Listing
- Fetch all models:
- Filter models based on category:
- Filter models based on platform:
- Filter models based on platform and category:
- Filter models based on usecases:
Microsoft and Amazon Cognito
To build a user management module with Express that uses a Microsoft account (for social login) and Amazon Cognito (for user pool management), follow the workflow below.

Prerequisites
1. Node.js and Express: Ensure you have Node.js and Express installed.
2. Amazon Cognito: Set up a user pool and an app client.
3. Microsoft Azure AD: Set up an application for OAuth2.0 authentication.

Step-by-Step Workflow

1. Set Up Your Environment
   1. Initialize Your Express Project
   2. Create Project Structure
   3. Setup Environment Variables

Create a .env file to store sensitive information (the values below are placeholders; the variable names are the ones read by the code):

```
MICROSOFT_CLIENT_ID=your_microsoft_client_id
MICROSOFT_CLIENT_SECRET=your_microsoft_client_secret
MICROSOFT_REDIRECT_URI=http://localhost:3000/auth/microsoft/callback
COGNITO_USER_POOL_ID=your_cognito_user_pool_id
COGNITO_CLIENT_ID=your_cognito_app_client_id
JWT_SECRET=your_jwt_secret
```

app.js:

```javascript
const express = require('express');
const bodyParser = require('body-parser');
const routes = require('./routes');
require('dotenv').config();

const app = express();
app.use(bodyParser.json());
app.use('/auth', routes);

const PORT = process.env.PORT || 3000;
app.listen(PORT, () => {
  console.log(`Server listening on port ${PORT}`);
});
```

routes.js:

```javascript
const express = require('express');
const { microsoftAuth, microsoftCallback, signup, login } = require('./authController');

const router = express.Router();

router.get('/microsoft', microsoftAuth);
router.get('/microsoft/callback', microsoftCallback);
router.post('/signup', signup);
router.post('/login', login);

module.exports = router;
```

authController.js (the Microsoft endpoints below assume the common tenant; replace with your tenant ID if needed):

```javascript
const axios = require('axios');
const { CognitoUserPool, CognitoUserAttribute } = require('amazon-cognito-identity-js');
const jwt = require('jsonwebtoken');

// Redirect the user to Microsoft's OAuth2 authorization page
const microsoftAuth = (req, res) => {
  const redirectUri =
    'https://login.microsoftonline.com/common/oauth2/v2.0/authorize' +
    `?client_id=${process.env.MICROSOFT_CLIENT_ID}` +
    '&response_type=code' +
    `&redirect_uri=${encodeURIComponent(process.env.MICROSOFT_REDIRECT_URI)}` +
    `&scope=${encodeURIComponent('openid profile email User.Read')}`;
  res.redirect(redirectUri);
};

const microsoftCallback = async (req, res) => {
  const code = req.query.code;
  try {
    // Exchange the authorization code for tokens
    const tokenResponse = await axios.post(
      'https://login.microsoftonline.com/common/oauth2/v2.0/token',
      null,
      {
        params: {
          client_id: process.env.MICROSOFT_CLIENT_ID,
          client_secret: process.env.MICROSOFT_CLIENT_SECRET,
          grant_type: 'authorization_code',
          code,
          redirect_uri: process.env.MICROSOFT_REDIRECT_URI
        },
        headers: { 'Content-Type': 'application/x-www-form-urlencoded' }
      }
    );

    const { access_token, id_token } = tokenResponse.data;

    // Fetch the user's profile from Microsoft Graph
    const userResponse = await axios.get('https://graph.microsoft.com/v1.0/me', {
      headers: { Authorization: `Bearer ${access_token}` }
    });
    const user = userResponse.data;

    // Proceed with Cognito user creation or login
    const userPool = new CognitoUserPool({
      UserPoolId: process.env.COGNITO_USER_POOL_ID,
      ClientId: process.env.COGNITO_CLIENT_ID
    });

    const attributeList = [
      new CognitoUserAttribute({ Name: 'email', Value: user.mail || user.userPrincipalName }),
      new CognitoUserAttribute({ Name: 'name', Value: user.displayName })
    ];

    userPool.signUp(user.id, 'dummyPassword', attributeList, null, (err, result) => {
      if (err) {
        console.error(err);
        return res.status(500).json({ error: err.message });
      }
      const cognitoUser = result.user;
      console.log('user name is ' + cognitoUser.getUsername());

      const token = jwt.sign({ sub: cognitoUser.getUsername() }, process.env.JWT_SECRET, {
        expiresIn: '1h'
      });
      res.json({ token });
    });
  } catch (error) {
    console.error(error);
    res.status(500).json({ error: 'Authentication failed' });
  }
};

const signup = (req, res) => {
  // Implement the logic for normal signup
};

const login = (req, res) => {
  // Implement the logic for normal login
};

module.exports = { microsoftAuth, microsoftCallback, signup, login };
```

Authentication flow:
1. Start Microsoft login: the client calls `/auth/microsoft` and is redirected to Microsoft's login page.
2. Microsoft redirects back: after the user signs in, Microsoft redirects to `/auth/microsoft/callback` with an authorization code.
3. Exchange code for tokens: your server exchanges this code for an access token and ID token.
4. Fetch user profile: use the access token to fetch the user's profile from the Microsoft Graph API.
5. Handle user in Cognito: check if the user exists in Cognito. If not, sign them up using the details fetched from Microsoft. Create a JWT for the user and send it back as a response.
6. Normal signup and login: implement normal signup and login logic using Cognito's SDK.

Additional Considerations
- Error Handling: Implement robust error handling and logging.
- Security: Ensure all tokens and sensitive data are handled securely.
- Token Management: Handle token refresh and expiration properly.
- User Experience: Improve UX with proper redirects and messaging.

This workflow provides a detailed roadmap for integrating Microsoft OAuth2.0 for social login and Amazon Cognito for user management in an Express application.
Weekly Status Report - 30th May
Executive Summary:
Model Zoo: Worked on finalizing the UX for the model zoo website pages. Worked on building the UI for the landing page and the model listing page. Built the APIs for the related pages. Added the CLI tool support for model visualization and live prediction with Yolo models and Pytorch models.
AI Explorations:
Driver Distraction System: Worked on getting a static video prediction for the driver distraction system. Documented the steps and the code in a repo with DVC and MLFlow for the model zoo usecase.
Speech Recognition System: Worked on the wake word detection and the transcript generation using the Wav2Vec model and Whisper model.
CSI: Collected additional data and generated the spectrograms for the data. Working on training the model using different neural network models to look into the accuracy improvements.

Accomplishments:
Dev:
AI-403: Plan for UI unit testing – ==Closed.==
AI-449: Planning for DB data model for the model details and other metrics – ==Closed.==
AI-451: Image Classification on SL1680 – ==Closed.==
AI-419: Curation of model details for each of the selected models - First 5 – ==Closed.==
AI-425: Collection of usecases and training models per usecase – ==Closed.==
AI-376: CLI Tools support for Yolo models for prediction and validation – ==Closed.==
AI-378: CLI tool support for other ultralytics models and Pytorch models prediction – ==Closed.==
AI-379: CLI tool support for model visualization and model downloads – ==Closed.==
AI-364: UI for model listing/search – ==Closed.==
AI-398: Planning of DB data model for tenancy support – ==In Progress.==
AI-408: Plan for metrics gathering in RPI and SL1680 – ==In Progress.==
AI-413: Setup of SL1680 board for executing the models – ==In Progress.==
AI-412: Selection of models for Synaptics/Public Model Zoo models finalization – ==In Progress.==
AI-420: Collection of model metrics for the set of selected models including RPI – ==In Progress.==
AI-431: Metrics collection of use case - RPI – ==In Progress.==
AI-432: Metrics collection of use case - SL1680 – ==In Progress.==
AI-339: CSI data collection and model training and testing – ==In Progress.==
AI-372: UX for other pages (About Us, Contact Us and Welcome guide) – ==In Progress.==
AI-359: Backend APIs for landing page with unit testing – ==In Progress.==
AI-366: Backend APIs for model listing and models search – ==In Progress.==
DevOps:
AI-354: Configure Grafana dashboards for GPU – ==Closed.==
AI-389: CI/CD pipeline for CLI tool Packaging – ==Closed.==
AI-391: Pipeline for model training and tuning – ==Closed.==
AI-385: CI/CD Pipeline for UI and Backend deployment – ==Closed.==
AI-450: Spin up QA Instance for Model zoo – ==Closed.==
AI-352: Test Artifacts in gitlab test machine – ==Closed.==
AI-387: CI/CD pipeline to push model metrics to DB – ==In Progress.==

Issues for Escalations/Blockers: NIL

Plan for next week:
Model Zoo: Website completion with model details page and Signup, login pages. Documentation and finalizing the usage of the CLI tool. Planning for tenancy and deployment. Execution of 2 usecase models and curation of details and metrics for general and Synaptics models.
AI Explorations:
Driver Distraction System: Execution of the system on Raspberry Pi. Testing of the system for a live scenario.
Speech Recognition System: Testing of the models using RPI and mic and testing the transcript generation for a live scenario. Documentation of the usecase models with DVC and MLFlow.
CSI: Explore and identify a better model with better accuracy. Documentation and logging of the usecase models with DVC and MLFlow.

PTO and Vacation:
Anees: 29th May
Kaushik: 31st May

DETAILED SUMMARY :

JIRA TICKET(S) COMPLETED:

[JENANI SAMPATH] AI-354: Configure Grafana dashboards for GPU – ==Closed.==
Completed the set up of dashboards. Resolved the template init failed issue. Deployed cadvisor to monitor container usage.

[JENANI SAMPATH] AI-389: CI/CD pipeline for CLI tool Packaging – ==Closed.==
Created the pipeline for pushing the sdk packages. Tested the working of the pipeline.

[JENANI SAMPATH] AI-391: Pipeline for model training and tuning – ==Closed.==
Completed the pipeline flow for model training and tuning using dvc and mlflow. Created documentation on usage in the wiki and shared it across the team.

[JENANI SAMPATH] AI-385: CI/CD Pipeline for UI and Backend deployment – ==Closed.==
Created a pipeline based on environment, where deployment happens dynamically. Modified the terraform scripts and added gitlab variables for the pipeline to be triggered. Dev and Qa branches are protected; only authorized users can commit directly.

[JENANI SAMPATH] AI-450: Spin up QA Instance for Model zoo – ==Closed.==
Spun up the QA instance in the Ohio region. Deployed the front-end and backend services. Installed certs for https and secure connection.

[JENANI SAMPATH] AI-352: Test Artifacts in gitlab test machine – ==Closed.==
Tested the artifact upload by changing configurations in nginx.conf and gitlab.rb. Triggered the pipeline for testing in the synaptics build server. Artifacts were successfully uploaded.

[SURAJRAM SRINIVASAN] AI-403: Plan for UI unit testing – ==Closed.==
Explored and identified the Jest framework for Next JS and Node JS. Started working on the unit testing for the UI modules.

[SURAJRAM SRINIVASAN] AI-449: Planning for DB data model for the model details and other metrics – ==Closed.==
Identified and planned the data model for the DB. Created a clearer data model with the relationships and tested the same on the DB for model details and metrics.

[ANEES ASLAM] AI-451: Image Classification on SL1680 – ==Closed.==
Able to run GST-AI based Image Classification for Static Video on SL1680. Based on configs, the existing model is trained for 1000+ labels including Sharks, Cats, Television (which doesn't include identification of Person). The output is shared on the Synaptics AIML Channel for reference.

[KEERTHANA SUNDARBABU] !!JIRA-ID!!: !!Description!! – ==Closed.==
!!Brief Summary of the ticket in 3-4 points!!

[JAYAPRAKASH MASILAMANI] AI-419: Curation of model details for each of the selected models - First 5 – ==Closed.==
Cleaned the old database and added the qualifiers field in the models table. Pushed the tflite metrics for yolov8-face and FastSAM models by triggering the pipeline. Modified the details of models in accordance with the updated data model schema.

[SWATHI RAJAMANI] AI-364: UI for model listing/search – ==Closed.==
Updated the UI for the landing page and search page. Added cards for models.
[SHAIK SHUAIB AFNAN] AI-425: Collection of usecases and training models per usecase – ==Closed.==
For the driver distraction use-case, made a wiki page and logged all the details related to the use-case, the model used for training and other details. Logged steps on how to perform training and prediction using mlflow. Used dvc for pushing and pulling the dataset from the s3 bucket. Used the inceptionresnetv2 model as the cnn model for implementing the driver distraction image classification system.

[SANJAY RAVI] AI-376: CLI Tools support for Yolo models for prediction and validation - ==Closed.==
Implemented and updated the SDK with YOLO model support for both prediction and validation. Tested and deployed the SDK for live streaming of video prediction.

[SANJAY RAVI] AI-378: CLI tool support for other ultralytics models and Pytorch models prediction - ==Closed.==
Implemented and updated the SDK with all ultralytics supported models. Tested all pytorch models that are available in torch hub for the predict and validate functions.

[SANJAY RAVI] AI-379: CLI tool support for model visualization and model downloads - ==Closed.==
Implemented the Get model function to download models from AWS S3 using the node backend API.

JIRA TICKET(S) IN PROGRESS:

[JENANI SAMPATH] AI-387: CI/CD pipeline to push model metrics to DB - ==In Progress.==
Added the documentation on how to use the pipeline for metrics pushing. Need to modify the scripts and pipeline structure according to the new db schema and data models. [ETA - 31/05 ]

[SURAJRAM SRINIVASAN] AI-398: Planning of DB data model for tenancy support – ==In Progress.==
Looked into and identified the different options for DB data models for tenancy support. Working on finding the suitable data model for the multi-tenant architecture. [ETA - 31/05 ]

[SURAJRAM SRINIVASAN] AI-408: Plan for metrics gathering in RPI and SL1680 – ==In Progress.==
Explored the profiler tools for gathering metrics from RPI and SL1680. Finalized the metrics pipeline and working on the metrics collection from SL1680 for a sample usecase. [ETA - 31/05 ]

[SURAJRAM SRINIVASAN] AI-412: Selection of models for Synaptics/Public Model Zoo models finalization – ==In Progress.==
Identified the different models and the usecases that can be brought out for the model zoo site. Drafted a table with the general models, the artifacts and the other related details for the general models and sent it out for review. Followed the same steps to get a collection of the usecase models and frame the usecases as well. [ETA - 31/05]

[SURAJRAM SRINIVASAN] AI-413: Setup of SL1680 board for executing the models – ==In Progress.==
Working on installing the packages and needed dependencies on the SL1680 board. Due to memory constraints certain packages are not getting installed. Working on checking how to install the packages for the CLI tool as well. [ETA - 31/05]

[ANEES ASLAM] AI-: Image Segmentation for SL1680 – ==In Progress.==
Need to reproduce the image segmentation logic used in SR110 for SL1680. Once the basic flow is set up, need to perform optimization, as a single inference is 200ms. [ETA - 30/05 ]

[KEERTHANA SUNDARBABU] AI-359: Backend APIs for landing page with unit testing – ==In Progress.==
Modified the postgres DB with the following changes: added the extra column category, and modified domain as Audio, Vision. Migrated the changes to the local postgres db. Loaded up the local db with the latest data by updating the shell scripts. Modified the getModels and getModelById APIs with the latest changes in the schema. Done with the Usecase API on the landing page along with the url. Parallelly working on unit tests for the backend APIs. [ETA - 31/05 ]

[KEERTHANA SUNDARBABU] AI-366: Backend APIs for model listing and models search – ==In Progress.==
Almost done with the model list APIs. All the combinational filters are incorporated to filter the models based on platforms and category. Working on final modifications to fit the latest changes. [ETA - 31/05 ]

[JAYAPRAKASH MASILAMANI] AI-420: Collection of model metrics for the set of selected models including RPI – ==In Progress.==
Implemented the pytorch profiling metrics script in rpi and was able to generate the summary and run-time layer analysis data for the yolov8-face (.pt) model. Modified the metrics generating script for both pytorch profiling and tflite profiling for the updated data models. Working on the insert API which is needed for pushing the model metrics into the database via the pipeline. [ETA - 31/05 ]

[JAYAPRAKASH MASILAMANI] AI-431: Metrics collection of use case - RPI – ==In Progress.==
Collected the different formats of models for Driver distraction Image classification - InceptionResNetV2. Pushed the model into the S3 bucket. Working on pushing the model metrics of the InceptionResNetV2 model.

[KAUSHIK VISHAL SARAVANAN] AI-432: Metrics collection of use case - SL1680 – ==In Progress.==
Converted a yolov8 image segmentation tflite file to synap format and moved it to SL1680. Collected the inference metrics for the yolov8 image segmentation model on SL1680. Working on collecting metrics for other models. [ETA - 31/5 ]

[KAUSHIK VISHAL SARAVANAN] AI-339: CSI data collection and model training and testing – ==In Progress.==
Trained and tested ResNet50 and MobileNetv2 on the spectrogram data obtained by processing the data from all four rooms. Working on improving accuracy on both ResNet and MobileNet models. Working on normalizing the data obtained and using that to train a TabNet model and also test it with test data from different rooms. [ETA - 31/5 ]

[SHAIK SHUAIB AFNAN] AI-372: UX for other pages (About Us, Contact Us and Welcome guide) – ==In Progress.==
Completed initial wireframes for the About Us, Contact Us and Welcome guide pages in Figma and looking for more design ideas and changes. [ETA - 31/05 ]
Driver-Distraction(Model Zoo)
Overview
Driver distraction poses a significant risk on the roads, contributing to countless accidents and fatalities each year. To mitigate this risk, advanced driver assistance systems (ADAS) have been developed, incorporating cutting-edge technologies like convolutional neural networks (CNNs) for real-time classification of driver behavior. In this scenario, a CNN model, namely InceptionResNetV2, is deployed to classify driver actions into ten distinct classes. Each class represents a different type of distraction, ranging from harmless activities like safe driving and operating the radio to more hazardous behaviors like texting or talking on the phone. The CNN model operates by analyzing the video feed from inside the vehicle, capturing the driver's actions and movements in real time. By leveraging deep learning techniques, the model learns to recognize patterns and features associated with each class, enabling it to accurately identify instances of driver distraction. For example, when the model detects that the driver is using their phone, it can classify the behavior into one of four categories: texting - right, texting - left, talking on the phone - right, or talking on the phone - left. Similarly, it can distinguish between other distractions such as drinking, reaching behind, or engaging in hair and makeup. Once a distraction is identified, the system can issue warnings to the driver, alerting them to refocus their attention on the road. In more advanced implementations, the system can even take corrective actions, such as activating lane-keeping assistance or adaptive cruise control to help maintain safe driving behavior.

Model
The CNN model used to implement the driver distraction image classification process is InceptionResNetV2.
!InceptionResNetV2 Architecture
Inception-ResNet-v2 is a convolutional neural architecture that builds on the Inception family of architectures but incorporates residual connections. It was introduced by Szegedy et al. as an extension of the original Inception architecture. Breaking it down:
1. Inception Architecture: The Inception architecture, also known as GoogLeNet, is designed to handle the trade-off between depth and computational efficiency in deep neural networks. It achieves this by using multiple parallel convolutional filters of different sizes and then concatenating their outputs. This allows the network to capture features at different scales.
2. Residual Connections: Residual connections, popularized by the ResNet architecture, address the vanishing gradient problem. In traditional neural networks, as the depth increases, gradients tend to become very small during backpropagation, making it difficult for the network to learn. Residual connections allow the network to "skip" layers by adding the original input to the output of a deeper layer. This helps gradients flow more easily during training.
3. Inception-ResNet-v2: Inception-ResNet-v2 combines the strengths of both Inception and ResNet. It replaces the filter concatenation stage of the Inception architecture with residual connections. This results in a deep neural network that can capture complex features while maintaining computational efficiency.
4. Applications: Inception-ResNet-v2 has been used for various tasks, including image classification, fine-grained image recognition, machine translation, and more.
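As an illustration of how such a classifier head is typically attached to InceptionResNetV2 (the approach detailed in the Model Building section below), here is a minimal Keras sketch; the exact hyperparameters are assumptions:

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import InceptionResNetV2

NUM_CLASSES = 10  # safe driving + nine distraction classes

# Pre-trained backbone without its classification head
base = InceptionResNetV2(include_top=False, weights="imagenet", input_shape=(299, 299, 3))
base.trainable = False  # freeze the backbone; optionally unfreeze the last few layers

# Classification head: global average pooling + softmax over the 10 classes
model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),  # learning rate is illustrative
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)
model.summary()
```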
Researchers have found it to be effective across different domains.

Data Collection
The dataset used for training the model is the State Farm dataset from Kaggle, which can be downloaded from the link below. In our case we are going to push the dataset into an Amazon S3 bucket and access it from there after downloading; in order to push and pull the dataset we will use DVC.

Why Use DVC
DVC (Data Version Control) facilitates efficient management of large datasets in machine learning by integrating with Amazon S3. It optimizes data versioning, ensuring reproducibility and collaboration by tracking changes at the file level without duplicating data. DVC's seamless integration with S3 enables parallel data fetching, enhancing performance for big data tasks. This integration also supports data pipelines, allowing for robust and scalable workflows. Ultimately, DVC empowers teams to handle complex datasets effectively, ensuring transparency and efficiency in machine learning projects.

Model Building & Training
The model building & training involves steps like:
1. Data Preprocessing: Load the images from a specified directory after pulling them from Amazon S3 using DVC; this can be done by running the command dvc pull if there is a .dvc file inside the root directory. Now load each image, resize them to a fixed dimension (299x299), and store their file paths along with the corresponding labels in a DataFrame. Split the dataset into training and testing sets using stratified sampling, ensuring that the distribution of labels is similar in both sets. Then apply data augmentation techniques like rescaling, shearing, zooming, and horizontal flipping to the training images to increase dataset variation and improve model generalization.
2. Model Building: Load the pre-trained InceptionResNetV2 model, excluding its top layer, and freeze all its layers except the last two. A custom sequential model is created by adding the pre-trained model, a global average pooling layer, and a fully connected dense layer with a softmax activation function. The model is compiled using a categorical cross-entropy loss function and an Adam optimizer with a specified learning rate.
3. Model Training: The compiled model is trained using the training data generator for a specified number of epochs. During training, the model's performance is evaluated on the validation set. The training history, including loss and accuracy metrics for each epoch, is stored.

Model Saving and MLflow
1. Extracting Metrics: After training is done, retrieve the final training and validation loss values, as well as the corresponding accuracy values, from the training history and log these metrics into MLflow.
2. Model Evaluation: The trained model is evaluated using the test generator to obtain the loss and accuracy metrics on the test dataset. Then the test accuracy is retrieved.
3. Logging Experiment Data: Various parameters and metrics related to the training process and model are logged using MLflow, a tool for tracking and managing machine learning experiments. Parameters such as batch size, optimizer type, number of epochs, learning rate, and image dimensions are logged. Metrics including training accuracy, validation accuracy, test accuracy, training loss, validation loss, and test loss are logged. Additionally, the sizes of the training, validation, and test datasets are logged.
4. Saving the Model: Save the trained model in various formats: .keras, .h5, and a TensorFlow Lite model.
Prediction
For prediction, you can load any of the saved models (.keras, .h5, TFLite, int8-quantized TFLite, or Keras SavedModel). Once loaded, preprocess the input image to 299x299 dimensions, convert it into an array of pixel values, and pass it directly into the model to get predictions. The predictions are float values; select the maximum value, take its index, and map that index to the corresponding class to get the output. You can also log the predictions in MLflow. A short prediction sketch using the TFLite model is shown below.
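A minimal sketch of this prediction flow using the (non-quantized) TFLite export. The file names and the class-label ordering are assumptions for the example; an int8-quantized model would additionally need its input scaled according to the interpreter's quantization parameters.

```python
# Illustrative only: file names and class-label ordering are assumptions.
import numpy as np
import tensorflow as tf
from tensorflow.keras.preprocessing import image

CLASS_NAMES = ["safe driving", "texting - right", "talking on the phone - right",
               "texting - left", "talking on the phone - left", "operating the radio",
               "drinking", "reaching behind", "hair and makeup", "talking to passenger"]

interpreter = tf.lite.Interpreter(model_path="driver_distraction.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Preprocess: resize to 299x299 and convert to an array of pixel values
img = image.load_img("frame.jpg", target_size=(299, 299))
x = image.img_to_array(img)[np.newaxis, ...] / 255.0

interpreter.set_tensor(inp["index"], x.astype(inp["dtype"]))
interpreter.invoke()
probs = interpreter.get_tensor(out["index"])[0]

# Pick the highest-scoring class and map the index back to its label
idx = int(np.argmax(probs))
print(f"Predicted class: {CLASS_NAMES[idx]} ({probs[idx]:.2f})")
```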
Compile the model
Here are the steps to convert a TFLite model into a SyNAP model so it can be installed on the device. The model conversion can be done locally or on the build server.
Download the Docker image of the SyNAP toolkit on the host.
Install an alias in the shell of the host to run the SyNAP Toolkit container.
You can get help on the available toolkit commands by running the line below:
Convert the model with the following command: set the chip name to SL1680 and replace the mymodel.tflite file with our model file. 'converted' is the name of the directory where the compiled model file will be saved.
The model is now compiled successfully. Copy the model to the device through SSH with the help of the scp command.
Now you can run inference to verify whether the model is working fine using the following steps:
SSH to the SL1680 as root, or access the root fs via the serial console.
Go to the directory where your model file (model.synap) was copied.
Now run the command. This will load the model and run inference for a random image. The inference time and the input and output tensors' parameters will be printed. Here is a sample output.
Example Conversion: ResNet50 model (image classification, residual network, 50 layers). TFLite: 99 MB, SyNAP: 50 MB. The command logs the target_device (SL1680), the delegate (NPU), and each model layer's info to model_info.txt.
CSI - TabNet model prediction
What is TabNet?
TabNet is a DNN (Deep Neural Network) architecture specifically engineered for tabular data. Integrating attention mechanisms with decision trees, it dynamically learns feature importance through iterative selection, optimizing both interpretability and predictive performance. Its innovation lies in adaptive sparse attention, allowing selective focus on relevant features during training and inference, which effectively handles high-dimensional, heterogeneous data while mitigating overfitting. Self-attentive layers facilitate complex pattern recognition, enabling effective representation learning. With a balance between model complexity and interpretability, TabNet is widely applicable across classification and regression tasks.
TabNet Architecture
How does TabNet work?
1. Feature Transformation: TabNet begins by transforming the input features using a shared embedding layer. This layer helps to represent categorical variables as continuous embeddings, which can capture relationships between categories more effectively.
2. Decision Making with Decision Blocks: The core component of TabNet is the decision block. A decision block consists of multiple sequential steps, each of which performs feature selection and decision making. At each step, the model selects a subset of features to pass to the next step based on their importance to the task at hand.
3. Feature Selection using Attention Mechanism: Within each decision step, TabNet employs an attention mechanism to dynamically select which features are most relevant for the current prediction. This attention mechanism allows the model to focus on informative features while ignoring irrelevant ones, improving both performance and interpretability.
4. Masked Feature Reuse: TabNet also introduces the concept of masked feature reuse, where information from previous decision steps is selectively reused to aid in subsequent decisions. This helps the model to effectively leverage information from different parts of the input space and improve overall performance.
5. Final Prediction: After processing through multiple decision steps, TabNet combines the final selected features and passes them through a fully connected layer to produce the final prediction.
STEPS INVOLVED IN TRAINING THE MODEL
1. Pre-processing the obtained data (CSV): Removed all the columns except the data column and computed the magnitude from the real and imaginary parts. After that, we computed the mean of the magnitude and stored it in a separate CSV with a label (0 = no person, 1 = person).
2. The data was manually split into training and testing sets.
3. Using the training data: The data from these folders is loaded into separate arrays, one for the mean values and one for the labels. These arrays are then concatenated to form the complete feature set and label set.
4. Data Preprocessing: If the features are one-dimensional, meaning they are represented as a single column, they are reshaped into a two-dimensional array where each row represents a sample and each column represents a feature.
5. Model Initialization: A model is initialized with specific parameters:
Optimizer: Adam optimizer with a learning rate of 0.02.
Learning rate scheduler: StepLR scheduler, which decreases the learning rate by a factor of 0.9 every 50 epochs.
Mask type: 'entmax', which specifies the type of sparsity-inducing mask to use.
Sparse regularization is set to 0, indicating no regularization for sparsity.
A minimal initialization and training sketch is shown below.
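The following is a minimal sketch of that initialization using the pytorch-tabnet library. The input file names and the epoch count are assumptions for illustration, not the project's actual values.

```python
# Sketch only: input file names and the epoch count are assumptions.
import numpy as np
import torch
from pytorch_tabnet.tab_model import TabNetClassifier

# Single feature (mean CSI magnitude) per sample; labels: 0 = no person, 1 = person
data = np.loadtxt("train_means.csv", delimiter=",", skiprows=1)
X_train = data[:, 0].reshape(-1, 1)          # (num_samples, 1)
y_train = data[:, 1].astype(int)

clf = TabNetClassifier(
    optimizer_fn=torch.optim.Adam,
    optimizer_params=dict(lr=2e-2),                  # learning rate 0.02
    scheduler_fn=torch.optim.lr_scheduler.StepLR,
    scheduler_params=dict(step_size=50, gamma=0.9),  # decay by 0.9 every 50 epochs
    mask_type="entmax",                              # sparsity-inducing mask
    lambda_sparse=0.0,                               # no sparse regularization
)
clf.fit(X_train, y_train, max_epochs=200)
clf.save_model("csi_tabnet")                         # saves csi_tabnet.zip
```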
STEPS INVOLVED IN TESTING THE MODEL
We load the saved model and use the testing data to:
1. Prepare the Features: We extract the feature values ('mean') from the loaded data and reshape them to match the input shape expected by the model. TabNet expects input features in the shape (batch_size, num_features), hence we reshape the features to (num_samples, 1) as there is only one feature in this case.
2. Make Predictions: We use the loaded model to make predictions on the test data. The prediction method returns the predicted probabilities for each class. Since this is a binary classification problem, we are interested in the probability of belonging to class 1 (i.e., the second column), hence we extract the probabilities for class 1 ([:, 1]).
3. Convert Probabilities to Binary Labels: We convert the predicted probabilities into binary labels by thresholding. Here, we use a threshold of 0.5, meaning if the predicted probability is greater than 0.5 we classify the sample as class 1 (person), otherwise as class 0 (no person).
4. Calculate Accuracy: We compare the binary predictions with the true labels and calculate the accuracy as the proportion of correct predictions. A condensed evaluation sketch is shown below.
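A minimal sketch of this evaluation flow, again with pytorch-tabnet. The test file name and the saved-model path are assumptions.

```python
# Sketch only: file names are assumptions; thresholding at 0.5 as described above.
import numpy as np
from pytorch_tabnet.tab_model import TabNetClassifier

data = np.loadtxt("test_means.csv", delimiter=",", skiprows=1)
X_test = data[:, 0].reshape(-1, 1)         # (num_samples, 1) - single 'mean' feature
y_test = data[:, 1].astype(int)

clf = TabNetClassifier()
clf.load_model("csi_tabnet.zip")           # model saved during training

probs = clf.predict_proba(X_test)[:, 1]    # probability of class 1 (person present)
preds = (probs > 0.5).astype(int)          # threshold at 0.5

accuracy = (preds == y_test).mean()
print(f"Test accuracy: {accuracy:.3f}")
```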
Location service
Repository: Product / astral_geolocation · GitLab (embedur.local)
Services being used for Astral Geolocation:
New Service:
Location Service
Minio Service
Changes to Existing Services:
Rest
Web
Db_Schema manager
Flow:
Algorithm:
1. The Location API is hit through the web by clicking the arrow button on the AP location widget, or directly through Postman with a bearer token.
2. From the API endpoint (api/v1/location?_sn=ZYXECA000114), query params such as _sn (serial_number), id (device_id), and _dsl_id (provision_key) are used to query the Cassandra DB to fetch the MAC address (id) using the respective serial_number or provision_key. For example:
3. After fetching the MAC address, check for existing location details of that device in the Postgres DB.
If location details exist:
i) Get the timestamp and TTL from the Postgres DB.
ii) Check if the timestamp is greater than the TTL; if yes, the Google Geolocation API is called, the response received from the Google API is updated in the Postgres DB with a new TTL, and the response is sent.
iii) If the timestamp is within the TTL, the Postgres DB is not updated and the existing details are sent as the response.
Else:
i) Get the ermneighbors of the respective MAC address.
ii) Ermneighbors may exist for radio_24 or radio_5 of the device, or may not exist at all.
iii) Pass the ermneighbors to the Google API if they exist; else pass the MAC address alone.
iv) Insert the location details into the Postgres DB and send the response received from the Google API.
db_schema.sql:
4. If the force flag is enabled (i.e., api/v1/location?_sn=ZYXECA000114&force=true), the Google API is called and the location details are updated with a new TTL. A hypothetical sketch of this caching/TTL logic is shown below.
Test doc link:
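To illustrate the TTL-based caching described in the algorithm, here is a hypothetical Python sketch. All helper names (postgres_fetch_location, fetch_erm_neighbors, call_google_geolocation, postgres_upsert_location) are placeholders, not the service's real functions, and the flow is simplified (neighbor scans are fetched before every Google call).

```python
import time

def get_location(mac: str, force: bool = False) -> dict:
    """Hypothetical sketch of the location lookup flow; helper names are placeholders."""
    cached = postgres_fetch_location(mac)                 # existing row for this device, or None
    if cached and not force:
        age = time.time() - cached["timestamp"]
        if age <= cached["ttl"]:
            return cached                                  # within TTL: return stored details
    # Stale, missing, or force=true: ask Google Geolocation, preferring neighbor scans
    neighbors = fetch_erm_neighbors(mac)                   # radio_24 / radio_5 neighbors, may be empty
    wifi_aps = neighbors if neighbors else [{"macAddress": mac}]
    location = call_google_geolocation({"wifiAccessPoints": wifi_aps})
    postgres_upsert_location(mac, location)                # store response with a fresh TTL
    return location
```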
Custom Usecases
| Usecases | Requirement Domain | BackBone Model | Hardware Required | Status | Owner | Gitlab Link |
|---|---|---|---|---|---|---|
| Custom Wake Word Detection | Audio | Keras Model | 1. RPI / Arduino Nano 2. Mic | In Progress | | |
| Driver Distraction System | Vision | InceptionResNet and Yolov8 Face | 1. RPI 2. PI Camera | In Progress | | |
| Face Matching System | Vision | Haarcascade and Inception v3 | 1. RPI 2. PI Camera | In Progress | | |
| CSI based person detection | Vision | Mobilenet | 1. ESP32 (2) | In Progress | | |
| Transcript generation | Audio | Wav2Vec / Whisper | 1. RPI 2. Mic | In Progress | | |
| License Plate recognition | Vision | Resnet | 1. RPI 2. PI Camera | To-Do | | |
| Audio DeNoising | Audio | Facebook Denoiser | 1. RPI 2. Mic | To-Do | | |
| Obstacle Detection (Self Driving) | Vision | Resnet | 1. RPI 2. PI Camera | To-Do | | |
| Brain Tumor Detection | Vision | Resnet | 1. RPI | To-Do | | |
| Person Segmentation on YolACT (480 x 480) | Vision | YolACT | 1. RPI / SR110 / STM 2. PI Camera | In Progress | | |
Weekly Status Report - 23rd May
Executive Summary:
Model Zoo:
Worked on restructuring the UX for the first phase of the website and the corresponding changes.
Added support for other Ultralytics models in the CLI tool and added support for live video detections on RPI.
Added support for model architecture visualization in the CLI tool.
Added the level 0 details of the models from the excel sheet to the DB.
AI Explorations:
Driver Distraction Detection System: Looked into and identified better models like Inception V3 and implemented the same. Tested on static video prediction and obtained the results.
Speech Recognition: Explored the Wav2Vec model and tested it for live speech recognition and static transcript generation. Worked on the Whisper Tiny model for live speech recognition and tested the same. Explored the Facebook Denoiser model and implemented simple noise removal from an audio file.
Transformers and OpenELM: Explored and understood the use cases and the working of transformers in language models. Understood the working of Swin Transformers and documented the same with a simple implementation. Working on understanding the Apple OpenELM model and its use cases.
CSI: Worked on using a neural network to train the CSI-based data to support tiny devices. Working on motion detection to detect movement in an environment.
Accomplishments:
Dev:
AI-337: Inclusion of Level 0 details of the collected models in the DB – ==Closed.==
AI-341: ML - Perf Report Generation – ==Closed.==
AI-342: Speech recognition - Arduino Nano BLE – ==Closed.==
AI-340: Retraining of YOLACT model for 480x480 image size – ==Closed.==
AI-353: Exploration on Apple OpenELM – ==Closed.==
AI-324: Driver Distraction Detection system - Identify model and test on RPI – ==Closed.==
AI-336: Explore on Transformers and Swin Transformers – ==In Progress.==
AI-269: Profiling and Comparing metrics, Registering a model – ==In Progress.==
AI-275: Base wrapper development – ==In Progress.==
AI-350: Speech Recognition - Whisper Model Explorations – ==In Progress.==
AI-326: Speech Recognition - Wav2vec2 model – ==In Progress.==
AI-339: CSI data collection and model training and testing – ==In Progress.==
AI-349: Yolo Model - Enhancement – ==In Progress.==
AI-348: Documentation for CLI Tool – ==In Progress.==
AI-328: Model Zoo Site with landing page functionalities – ==In Progress.==
DevOps:
AI-345: Install SSL certs for ml-dev server – ==Closed.==
AI-332: MLflow and DVC Integration – ==Closed.==
AI-346: Test Instance in south region for latency issues – ==Closed.==
AI-351: CI/CD Implementation for SDK – ==In Progress.==
AI-352: Test artifacts pushing in upgraded gitlab test machine – ==In Progress.==
AI-354: Configure Grafana dashboards for GPU – ==In Progress.==
Issues for Escalations/Blockers:
NIL
Plan for next week:
Model Zoo:
Modifications and finalizing the UX and the related UI changes for the website.
Collection of details for the steps to execute the model.
Addition of a download-models feature to the CLI tool and testing of PyTorch models.
Packaging of the CLI tool setup.
Curating the document for the setup and the working of the CLI tool.
AI Explorations:
Driver Distraction Detection System: Support for live prediction of the system. Support for prediction using Raspberry Pi.
Speech Recognition: Experiment with the Whisper models with mic and RPI.
Work on trying whether modifications could be made to perform custom tasks based on the speech recognised in the Python version of the model.
Transformers and OpenELM: Further explorations and implementations using Swin Transformers for image classification. Exploration and simple implementation using Apple OpenELM.
CSI: Explore detailed neural networks and motion detection in CSI.
PTO and Vacation:
Keerthana - 24th May
Kaushik - 24th May
DETAILED SUMMARY:
JIRA TICKET(S) COMPLETED:
[JENANI SAMPATH] AI-345: Install SSL certs for ml-dev server – ==Closed.==
Created a wildcard domain *.ml-dev.int.embedur.com.
Deployed caddy-proxy for automatic certificate obtainment and renewal.
The application is HTTPS-secure for access.
[JENANI SAMPATH] AI-332: MLflow and DVC Integration – ==Closed.==
Created scripts in the tinyml repo for the MLflow and DVC integration for dev usage.
Created Wiki documentation on the process and workflow.
[JENANI SAMPATH] AI-346: Test Instance in south region for latency issues – ==Closed.==
Created a test instance in the south region similar to ml-dev.
Deployed the frontend and backend applications, backing up the Postgres data.
Tested the API; the issue seems to persist. Raised the issue back to dev.
[SURAJRAM SRINIVASAN] AI-337: Inclusion of Level 0 details of the collected models in the DB – ==Closed.==
Collected and cleaned up the Level 0 details from the excel sheet.
Loaded the details, with the GitHub and research paper links, into the DB of the dev server.
Added the details in a separate table. Working on including the details in the master table.
[ANEES ASLAM] AI-341: ML - Perf Report Generation – ==Closed.==
Completed; shared documentation and an "ipynb" notebook with steps.
Next steps would require registration on the MLCommons platform.
[KEERTHANA SUNDARBABU] AI-342: Speech recognition - Arduino Nano BLE – ==Closed.==
Compiled and deployed the micro-speech example on the Arduino Nano BLE device.
Made minor changes to the functionality to trigger a function based on whether the recognized word is "yes" or "no".
A simple function prints "Your command is Yes/No" based on the command recognized.
[JAYAPRAKASH MASILAMANI] AI-340: Retraining of YOLACT model for 480x480 image size – ==Closed.==
Tried the yolact_minimal repository for retraining using the COCO person dataset for 480 image size.
Installed torch v1.1 and torchvision v0.3.0 after spinning up a virtual environment.
Set up the CPP dependency drivers and implemented the retraining for 480 image size.
Faced an issue while training; solved the issue and saved the model.
[KAUSHIK VISHAL SARAVANAN] AI-353: Exploration on Apple OpenELM – ==Closed.==
Explored AppleELM and logged all the findings in the Wiki.
Successfully ran the model with different parameters locally.
[SHAIK SHUAIB AFNAN] AI-324: Driver Distraction Detection system - Identify model and test on RPI – ==Closed.==
Implemented the driver distraction system using the InceptionResNetV2 model, as EfficientNetB7 wasn't performing well on new data and InceptionResNetV2 is lighter than EfficientNetB7; achieved an accuracy of 86% in 10 epochs compared to EfficientNetB7, which got 80% for the same number of epochs.
Wrote a script to pass a static video as input into the model, get predictions for each frame, and save the predicted video.
Recorded a live video in a car, passed it as a static video, and got predictions.
Converted the InceptionResNetV2 model into tflite and quantized tflite (int8) models.
JIRA TICKET(S) IN PROGRESS:
[JENANI SAMPATH] AI-351: CI/CD Implementation for SDK – ==In Progress.==
Created a basic template for CI/CD.
Need to proceed with the scripts required for execution.
[ETA - 27/05]
[JENANI SAMPATH] AI-352: Test artifacts pushing in upgraded gitlab test machine – ==In Progress.==
Setting up the test gitlab machine locally.
Configured the nginx.conf to allow a higher client body max size for uploads.
Set up the runner for gitlabtest on the Synaptics runner server.
[ETA - 30/05]
[JENANI SAMPATH] AI-354: Configure Grafana dashboards for GPU – ==In Progress.==
Debugging the "template init failed" issue for already configured dashboards.
Work on setting up new dashboards.
[ETA - 30/05]
[SURAJRAM SRINIVASAN] AI-336: Explore on Transformers and Swin Transformers – ==In Progress.==
Explored Transformers for computer vision tasks.
Understood the working of Vision Transformers and Swin Transformers.
Working on a simple image classification task using Swin Transformers.
[ETA - 27/05]
[SURAJRAM SRINIVASAN] AI-269: Profiling and Comparing metrics, Registering a model – ==In Progress.==
Completed the code for the model profiling and comparing metrics.
Added code for registering a model using MLflow APIs.
Working on testing the model registration and its use in the model lifecycle part.
[ETA - 27/05]
[ANEES ASLAM] AI-275: Base wrapper development – ==In Progress.==
Completed the script to download an existing AI model from the S3 bucket and convert it into Binary/TFLM optimized for the target hardware.
[KEERTHANA SUNDARBABU] AI-350: Speech Recognition - Whisper Model Explorations – ==In Progress.==
Implemented speech-to-text from an audio file using the Faster-Whisper model (a faster implementation of OpenAI's Whisper).
The Faster-Whisper model is comparatively faster than the wav2vec and OpenAI models, but not suitable for live prediction unless fine-tuned.
OpenAI's whisper.cpp version was working fine with live speech-to-text as well as with a .wav file.
Tried the whisper.cpp model on the Raspberry Pi 4B+; the model works fine with a recorded wav file.
Due to the unavailability of a mic, yet to check live speech-to-text on the Raspberry Pi.
Looking for ways to quantize the tiny.en model and run it on the Raspberry Pi more efficiently.
[ETA - 27/05]
[JAYAPRAKASH MASILAMANI] AI-326: Speech Recognition - Wav2vec2 model – ==In Progress.==
Implemented speech recognition using the PyTorch WAV2VEC2_ASR_BASE_960H model, where audio files without background noise are transcribed with almost perfect accuracy.
Leveraged the open-source wav2vec2-live module and implemented real-time speech recognition using the facebook/wav2vec2-large-960h-lv60-self model.
The model was able to detect the speech and recognize the words, but the word error rate for certain words seems to be an issue.
Due to the unavailability of a mic, yet to check live speech-to-text on the Raspberry Pi.
Curated the details on the model architecture and documented them in a wiki.
[ETA - 27/05]
[SWATHI RAJAMANI] AI-328: Model Zoo Site with landing page functionalities – ==In Progress.==
Working on landing page enhancements.
Added animations and auto scrolls.
[ETA - 30/05]
[KAUSHIK VISHAL SARAVANAN] AI-339: CSI data collection and model training and testing – ==In Progress.==
Working on training and testing a neural network model with the data obtained from the 3 rooms.
Working on generating spectrogram images for the dataset, to be used as training and testing data for a ResNet model.
Working on gesture detection using the CSI data.
[ETA - 30/05]
[SANJAY RAVI] AI-348: Documentation for CLI Tool – ==In Progress.==
Need to update the document for the new feature enhancements.
[ETA - 24/05]
[SANJAY RAVI] AI-349: Yolo Model - Enhancement – ==In Progress.==
Added support for all Ultralytics models (SAM, FASTSAM, NAS, MOBILESAM, YOLOWORLD).
Need to test the model prediction and validation functionality for all Ultralytics models.
Completed the live video predictions on RPI.
Completed model visualization for YOLO models.
[ETA - 24/05]
Model ZOO APIS
API design
Landing Page:
- Use-case Listing API
Explore Page:
- Filters Listing API
MODEL ZOO - APIs
APIs List
- List all models.
- Filter models by:
  - Supported devices.
  - Supported chipsets.
  - Domain (Image Segmentation and Detection, Pose Estimation, Face Recognition).
  - Generation-based (Image, text) [Fix the types after discussion with AI team].
  - Tag-based (scenarios).
- Model Details:
  - Model name.
  - Model description.
  - Supported devices & chipsets.
  - Applicable scenarios (tags).
  - Metrics.
  - Licenses.
- Model Metrics: TBD.
- Code repository (open source).
- Download model:
  - As protobuf.
  - As H5/HDF5 (TensorFlow).
  - As tflite.
- Get All Filters.

Database Design

API design
- List all models.
- Filter models by:
  - Supported devices.
  - Supported chipsets.
  - Domain (Image Segmentation and Detection, Pose Estimation, Face Recognition).
  - Generation-based (Image, text) [Fix the types after discussion with AI team].
  - Tag-based (scenarios).
  - Combination filters.
- Model Details:
  - Model name.
  - Model description.
  - Supported devices & chipsets.
  - Applicable scenarios (tags).
  - Metrics.
  - Licenses.
  - Code repository (open source).
- Download model:
  - As protobuf.
  - As H5/HDF5 (TensorFlow).
  - As tflite.
- Get All Filters.
- Insert API.
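To make the listing/filtering design concrete, here is a hypothetical FastAPI sketch. The route paths, query parameters, and in-memory data source are assumptions, not the finalized API.

```python
# Hypothetical sketch only: routes, parameters and the data source are assumptions.
from typing import List, Optional
from fastapi import FastAPI

app = FastAPI()
MODELS: List[dict] = []   # in practice this would be backed by the model-zoo database

@app.get("/api/v1/models")
def list_models(device: Optional[str] = None, chipset: Optional[str] = None,
                domain: Optional[str] = None, tag: Optional[str] = None):
    """List all models, optionally filtered by device, chipset, domain or tag (combinable)."""
    results = MODELS
    if device:
        results = [m for m in results if device in m.get("supported_devices", [])]
    if chipset:
        results = [m for m in results if chipset in m.get("supported_chipsets", [])]
    if domain:
        results = [m for m in results if m.get("domain") == domain]
    if tag:
        results = [m for m in results if tag in m.get("tags", [])]
    return results

@app.get("/api/v1/models/{name}")
def model_details(name: str):
    """Return name, description, supported devices/chipsets, tags, metrics and license."""
    return next((m for m in MODELS if m.get("name") == name), {})
```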
Status - Week ending on 24/05
Summary:
Internal-Engineering - Gitlab/Jira/AWS/Wiki:
Gitlab production upgrade was completed successfully from 13.8.8 to 13.12.15.
A set of protocols was established and documented for GET item from SharePoint to facilitate the face recognition project.
A new Gitlab user was created and configured in Gitlab as an integration. This is triggered for every MR in the product group to endorse mentions in Gitlab commits.
The IP tables were inconsistent with the ports assigned, making it impossible to resolve ports with a direct IP address. Reconfigured the IP tables manually and rebound the updated port mappings with the Docker daemon.
Internal Environments - QA/Dev:
The airtel-dev2 domain has now transitioned to airtel-dev2.int.embedur.com; R53, certs and nginx have been configured for the same.
Singtel Production:
The communication issue between Huawei and VMware CRS nodes has been resolved. The data migration activity can be started once penetration testing is completed.
Cronjobs have been enabled for all the push-notification and runtime streaming topics from our end.
Services pertaining to the ELK stack deployment in VMware Singtel Production are in progress.
Raised a CSR for 2 domains (staging-wifi-analytics.singnet.com.sg, staging-wifi-analytics.sg.singtelgroup.net) in the VMware Staging environment.
Airtel Production:
For around 1k Zyxel devices, the firmware upgrade was done on 18-05-24 to fix the high interference value issue. Taken the interference value before and after the firmware upgrade for comparison, and also the neighbour data to correlate with the interference values.
Taken the CPU temperature for the same 1k devices from ORC data before and after the firmware upgrade.
Airtel Production Cloud has been upgraded to 2.9.1.
Taken the necessary data from radio_5, radio_24 and Accesspoint of the ORC files during the peak hours and plotted the graph with the interference value fetched from the field "erm.current_channel_usage.idle".

Tickets information:

| Closed Tickets | Description |
|---|---|
| DEVOPS-1152 | Domain Migration airtel-dev2 |
| DEVOPS-1153 | JIRA GitLab Bot |
| DEVOPS-1154 | GET item from sharepoint |
| DEVOPS-1156 | Diagnose Network Reachability via IP (Dev Servers) |
| AIR-3698 | Self Healing Summary reports using PySpark Job |
| DEVOPS-1157 | Upgrade gitlab from 13.8.8 to 13.12.15 in Test machine |
| DEVOPS-1158 | Upgrade gitlab from 13.8.8 to 13.12.15 in Prod machine |
| DEVOPS-1155 | Upgrade Airtel Production to 2.9.1 |
| AIR-3705 | Fetch the CPU temperature for trail devices |
| AIR-3704 | Fetch Neighbour data & interference value for trail devices |
| AIR-3706 | Create graphs with interference values for ZTE devices |

| In-Progress Tickets | Description | ETA |
|---|---|---|
| AI-275 | Wrapper for deployment in device | 01/06/2024 |
| SMN-718 | Data Migration Setup - Production environment | 31/05/2024 |
| SMN-739 | Consume Kafka messages from VMware Prod | 31/05/2024 |
| SMN-742 | Multi domain Cert for Singtel Staging (SAN Certs) | 31/05/2024 |
| SMN-743 | Resurrect ELK stack in VMware Singtel Prod | 31/05/2024 |

PTO & Vacation:
Preeta - 23/05/2024 (Half day)
Plan for next week:
Flashing Firmware
The focus is on developing a C++ firmware flashing library for a tiny device, alongside investigating alternative methods for flashing devices. Testing and documenting the findings will be pivotal to ensure the reliability of the solutions crafted.
Recommendation Service
To develop a model for recommendation services, we will begin by defining the response and finalizing the service architecture. Next, we will make plans for integrating machine learning models with the service and identify the required parameters.
Finally, we will finalize the model, response, and input JSON format.
Pyspark-Fingerprint
An automated process will be implemented using a cron job. It will run a Spark script every hour to collect the client data with duplicates removed. Another cron job will be configured to run a Spark script every 6 hours; this will process the per-hour files created in the previous step, increasing efficiency by removing additional duplicates and considerably reducing the ORC file size.
Self-Healing Weekly report
To add the number of devices rebooted 10 times a day, and the same devices repeated at least 3 times a week, to the weekly summary report.
ELK Stack Deployment and Multi domain Cert generation
Ensure that the ELK stack functionality works as expected in the VMware Production environment. Also validate the multi-domain cert functionality for the Singtel staging environment once the customer provides us the certs.
Cassandra Data Migration
Start the Cassandra data migration activity once penetration testing has been done. Also, ensure that the customer can consume messages from the Kafka brokers in the VMware Production environment for all the topics.

Detailed Summary:
Gitlab Upgrade
Validation was conducted following the backup and restoration process on the test machine. The production machine underwent an upgrade to version 13.12.15, and both pre- and post-validation procedures were carried out, subsequently updating the Wiki.
Azure Graph API
Protocols were established and documented for retrieving items from SharePoint to fetch private items with the appropriate Azure application. This supported the face recognition (Attendance System) project initiative.
GitLab Integration
A new GitLab user was created and configured as an integration for endorsing mentions in GitLab commits, triggered by merge requests within the product group.
Network Reachability (cloud-devel)
There were issues with IP tables lacking coherence with assigned ports, resulting in an inability to resolve ports with direct IP addresses. This was addressed by manual reconfiguration of the IP tables and subsequent rebinding of the updated port mappings with the Docker daemon.
Self Healing Reports - PySpark
The application analyzes the data to identify devices experiencing the use cases (CPU temperature, usage, memory usage, etc.) repeatedly over a specific timeframe (e.g., more than 3 days). The application utilizes the extracted data to generate summary reports. These reports typically include:
Device ID
Repeated issue (e.g., CPU temperature exceeding threshold)
Duration of the issue (e.g., "More than 3 days")
Additional relevant information (may vary)
This is completed and validated against the regular reports provided every Monday.
Zyxel Trial devices
Taken the interference values from the ORC files - from 17-05-24 to 18-05-24 (before the firmware upgrade) and from 19-05-24 to 22-05-24 (after the firmware upgrade). Also taken the neighbour data using REST APIs.
Taken the CPU temperature for the devices before and after the firmware upgrade for further analysis, as requested by Airtel.
Cloud Upgrade
On 23-05-24, the Airtel production cloud was upgraded to 2.9.1. In this release we have addressed the Sercomm ONT high CPU temperature issue.
Interference - ZTE
Taken the necessary ORC data on 22-05-24 for the time interval from 2 PM to 4 PM and plotted the graph using a Python script.
The graph was plotted using the field "erm.current_channel_usage.idle" instead of "erm.current_channel_usage.interference", since we have swapped the values in the bridge service (a minimal plotting sketch follows at the end of this page).

Singtel VMware Prod Kafka Streaming
Live Migration activity was performed from the Huawei end to resolve CRS-5 and CRS-8 node reachability issues to VMware. A connection has also been established between the CRS nodes of Huawei and VMware by adding certain routes in some of the KFE servers. Cronjobs have also been enabled for all the topics to produce messages every minute. There seems to be an issue from the customer end in consuming Kafka messages; we are continuously following up with Francheska regarding this.

ELK Stack Deployment and Multi domain Cert generation
Services pertaining to the ELK stack have been deployed in the VMware Production environment; functionality still needs to be verified. Created a CSR for two of the domains (staging-wifi-analytics.singnet.com.sg, staging-wifi-analytics.sg.singtelgroup.net) in the VMware Staging environment. Functionality will be validated once the customer provides the certs.
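For reference, the following is a minimal Python sketch of the kind of plotting described for the ZTE interference graphs. The ORC file path, the timestamp column name, and the output file are assumptions for illustration; only the "erm.current_channel_usage.idle" field name comes from this report, and this is not the production script.

```python
# Hypothetical sketch: plot channel idle values (used as an interference proxy) from an ORC dump.
# Assumes pandas with pyarrow installed; file path and timestamp column are made up.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_orc("radio_5_2024-05-22.orc")                       # assumed local copy of the ORC file
df = df.rename(columns={"erm.current_channel_usage.idle": "idle"})

# Keep only the 2 PM - 4 PM window (assumes an epoch-seconds column named "timestamp").
df["ts"] = pd.to_datetime(df["timestamp"], unit="s")
window = df[(df["ts"].dt.hour >= 14) & (df["ts"].dt.hour < 16)]

window.plot(x="ts", y="idle", kind="line", title="Channel idle (proxy for interference)")
plt.xlabel("Time")
plt.ylabel("erm.current_channel_usage.idle")
plt.savefig("zte_interference_idle.png")
```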
MLPerf Edge
Prerequisites

Hardware Requirements
Ubuntu Environment
Disk Space - 3 GB (for the classification model)

Software Requirements
Python version 3.10.x
Clone the github repository from MLCOMMONS
pip install opencv-python
pip install opencv-python-headless
pip install matplotlib
pip install pycocotools
pip install pybind11
pip install tensorflow (or pip install tensorflow-gpu)
pip install onnxruntime (or pip install onnxruntime-gpu)

Steps To Implement The Edge Benchmark (Resnet50)
1. Clone the repository given above.
2. Go to vision → classification_and_detection.
3. Prepare the Imagenette2 dataset by downloading the "Full size download" dataset from the website given below.
4. Move the extracted dataset folder to the folder where the inference tools are situated. Example: move the extracted dataset folder to inference → tools → upscale_coco.
5. The downloaded Imagenette2 images are not scaled properly for the model to compute, so they have to be rescaled to 224 x 224 so the model can run properly. To achieve this, create a .sh (bash) file with the code below and execute it.
6. With the scaled images, create a validation map for those images by running the .sh (bash) file below.
   Note: make sure that the above two .sh files are in the "imagenette2" folder.
7. Download the model using the following command.
8. Build and install the benchmark.
9. Set your model path using the following command.
10. Run the benchmark. The output will look like the sample below.

Other Command Usage
--mlperf_conf - the mlperf config file to use for rules-compliant parameters; defaults to ../../mlperf.conf
--user_conf - the user config file to use for user LoadGen settings such as target QPS; defaults to user.conf
--dataset - use the specified dataset. Currently only ImageNet is supported.
--dataset-path - path to the dataset.
--data-format {NCHW,NHWC} - data format of the model (default: the backend's preferred format).
--scenario {SingleStream,MultiStream,Server,Offline} - comma-separated list of benchmark modes.
--profile {resnet50-tf,resnet50-onnxruntime,retinanet-onnxruntime,retinanet-pytorch,mobilenet-tf,mobilenet-onnxruntime,ssd-mobilenet-tf,ssd-mobilenet-onnxruntime,ssd-resnet34-tf,ssd-resnet34-onnxruntime,resnet50-tvm-onnx,resnet50-tvm-pytorch} - fills in default command-line options with the ones specified in the profile. Command-line options that follow may override them.
--model MODEL - the model file.
--inputs INPUTS - comma-separated input name list in case the model format does not provide the input names. This is needed for TensorFlow since the graph does not specify the inputs.
--outputs OUTPUTS - comma-separated output name list in case the model format does not provide the output names. This is needed for TensorFlow since the graph does not specify the outputs.
--output OUTPUT - location of the JSON output.
--backend BACKEND - which backend to use. Currently supported: tensorflow, onnxruntime, pytorch and tflite.
--threads THREADS - number of worker threads to use (default: the number of processors in the system).
--count COUNT - number of images of the dataset to use (default: use all images in the dataset).
--qps QPS - expected QPS.
--max-latency MAX_LATENCY - comma-separated list of latencies (in seconds) we try to reach in the 99th percentile (default: 0.01,0.05,0.100).
--max-batchsize MAX_BATCHSIZE - maximum batch size we generate for the backend (default: 128).
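The helper scripts referenced in steps 5 and 6 are not included on this page. As a rough stand-in, here is a minimal Python sketch (not the original .sh files) that rescales the Imagenette2 validation images to 224 x 224 and writes a simple validation map; the folder layout, file extensions and label scheme are assumptions, so adapt it to whatever layout the MLPerf dataset loader expects.

```python
# Hypothetical stand-in for the missing .sh helpers: rescale Imagenette2 images to
# 224x224 and emit a val_map.txt (filename + numeric class label). Folder layout is assumed.
import os
import glob
import cv2

dataset_dir = "imagenette2/val"        # assumed location of the extracted validation images
out_dir = "imagenette2/val_224"
os.makedirs(out_dir, exist_ok=True)

val_map_lines = []
class_dirs = sorted(d for d in os.listdir(dataset_dir)
                    if os.path.isdir(os.path.join(dataset_dir, d)))

for label, class_dir in enumerate(class_dirs):
    for src in glob.glob(os.path.join(dataset_dir, class_dir, "*.JPEG")):
        img = cv2.imread(src)
        if img is None:
            continue
        resized = cv2.resize(img, (224, 224))
        dst_name = f"{class_dir}_{os.path.basename(src)}"
        cv2.imwrite(os.path.join(out_dir, dst_name), resized)
        val_map_lines.append(f"{dst_name} {label}")

with open(os.path.join(out_dir, "val_map.txt"), "w") as f:
    f.write("\n".join(val_map_lines))
```

Treat this only as an illustration of the 224 x 224 rescale and validation-map idea; the actual benchmark harness may expect a different directory structure and label ordering.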
Whisper Model (OPENAI)
Architecture:
Whisper architecture uses an encoder-decoder transformer, which as the name implies consists of an encoder and a decoder. The encoder obtains a latent representation from speech, and the decoder outputs text from the latent representation.

Preprocessing:
Audio files are handled in 16 kHz PCM format, scaled to a range of -1 to 1 and frequency converted with an 80-channel Mel Spectrogram. The window size for the Mel Spectrogram conversion is 25 ms and the stride is 10 ms. The Mel Spectrogram is used in segments of 30 seconds each.

Encoder:
The encoder is run only once per 30-second segment to extract the latent representation from the spectrogram. First, it applies a convolution twice, using GELU for activation with a filter size of 3, to compute the input embedding. The second convolution has a stride of 2. The transformer performs position embedding using a sine function. Since the encoder is executed only once per 30-second segment, the load is not very high.

Decoder:
The decoder outputs the probability of occurrence for each one of the 51865 tokens from the latent representation. The tokens are determined by performing a Greedy Search or Beam Search on the probability of occurrence of the tokens in the output. The beam search has a beam size (number of search branches) of 5. The decoder outputs a maximum of 224 tokens per 30-second segment, so it is executed up to 224 times. If two consecutive timestamp tokens appear in the decoding results across the 224 inferences, the token sequence of speech recognition is chosen and output.

Postprocessing:
For the chosen token sequence, GPT2TokenizerFast decodes it into text. The tokenizer is a byte-level BPE text tokenizer, which does not output words but Unicode byte codes. The timestamp gives you the number of seconds of actual speech recognition that was done. The unprocessed part of the Mel Spectrogram is then cut out, padded back to 30 seconds of Mel Spectrogram, and the decoding process is repeated.

Both encoder and decoder have the same transformer architecture.

Keywords:

Mel Spectrogram
A mel spectrogram combines the concept of a spectrogram with the mel scale. It calculates the spectrogram of the audio signal as usual. However, instead of using the linear frequency scale on the y-axis, it uses the mel scale. This results in a spectrogram where the frequency bins are spaced according to human auditory perception.

Multi Layer Perceptron
Simple Architecture: An MLP in a transformer typically consists of two fully-connected (dense) layers with an activation function in between. The activation function, often GELU (Gaussian Error Linear Unit) in transformers, adds non-linearity.

Self Attention Layers
In an encoder with self-attention layers, each element in the sequence attends to (focuses on) other relevant elements within the same sequence. This allows the model to capture long-range dependencies and understand the context of each audio segment.

Cross Attention Layers
In the Whisper model, the decoder uses cross-attention. The decoder takes a previously translated word (query) and attends to the encoded source sentence (keys and values) to generate the next translated word. It focuses on parts of the source language most relevant to the current translation step.

ARCHITECTURAL DETAILS
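To make the preprocessing numbers above concrete, here is a minimal, hedged Python sketch that computes an 80-channel Mel Spectrogram with a 25 ms window and a 10 ms stride at 16 kHz using librosa. The audio file name is an assumption, and this is an illustration of the parameters rather than Whisper's own implementation.

```python
# Minimal sketch of Whisper-style audio preprocessing (not the official implementation).
# 16 kHz audio, 25 ms window (400 samples), 10 ms hop (160 samples), 80 mel channels.
import librosa
import numpy as np

audio, sr = librosa.load("speech.wav", sr=16000)   # assumed input file, resampled to 16 kHz
audio = np.clip(audio, -1.0, 1.0)                  # librosa already returns floats in [-1, 1]

mel = librosa.feature.melspectrogram(
    y=audio, sr=sr, n_fft=400, hop_length=160, n_mels=80
)
log_mel = librosa.power_to_db(mel)                 # log scaling, roughly what the model consumes

# Whisper works on 30-second segments: 30 s / 10 ms hop gives roughly 3000 frames per segment.
print(log_mel.shape)                               # (80, n_frames)
```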
Status Update(20th - 24th)
Executive Summary:
!!!!

Accomplishments:
!!!!

Issues for Escalations/Blockers:
!!NIL!!

Plan for next week:
!!!!

PTO and Vacation:
Vimal: !!!!
Vishnu: !!!!
Ganapathy: !!!!
Thangavel: !!!!
Jyotsna: !!!!
Shri Ramki: !!!!

DETAILED SUMMARY :

JIRA TICKET(S) COMPLETED:

[VIMAL] AIR-3665: Create a policy and secret for new services – ==Completed.==
Created a policy and secret for the new services and pushed the code to the devops repo.

[VISHNU] AIR-3699: Change CPU temperature for Sercomm model – ==Completed.==
Changed the runtime value for CPU temperature based on the request from the Sercomm team.
The expectation is that the cloud must reduce the temperature by 15°C if the CPU temperature is ≤ 112°C and ≤ 15°C, and must not change the temperature value if the CPU temperature is > 112°C.

[VISHNU] AIR-3686: Add reboot reasons rule in self healing schema – ==Completed.==
Added the key erm_reboot_reasons_day_all to the self-healing schema for tracking the reboot counts per day.

[VISHNU] AIR-3648: Modify the monitor mode service logic – ==Completed.==
Added a new topic to be consumed directly from APISIX.
Added URL parsing logic from the HTTP log server service, filtering out only the GET calls.

[GANAPATHY] !!JIRA-ID!!: !!Description!! – ==Closed.== !!Brief Summary of the ticket in 3-4 points!!
[THANGAVEL] !!JIRA-ID!!: !!Description!! – ==Closed.== !!Brief Summary of the ticket in 3-4 points!!
[JYOSTNA] !!JIRA-ID!!: !!Description!! – ==Closed.== !!Brief Summary of the ticket in 3-4 points!!
[SHRI] !!JIRA-ID!!: !!Description!! – ==Closed.== !!Brief Summary of the ticket in 3-4 points!!

JIRA TICKET(S) IN PROGRESS:

[VIMAL] !!JIRA-ID!!: !!Description!! – ==In Progress.== !!Brief Summary of the ticket in 2-3 points!!

[VISHNU] AIR-3651: Create DB/redis/Kafka connection & implement business logic for recommendation service – ==In Progress.==
Establish a connection to the DB/redis/Kafka.
Subscribe to the topic that comes from the monitor mode service.
Once the monitor ack is received, this service gets the required field values from runtime.journal for some timeframe and sends them to the recommendation model.
Once the model sends the response, the cache is updated with the recommended value, the status and the timestamp.

[GANAPATHY] !!JIRA-ID!!: !!Description!! – ==In Progress.== !!Brief Summary of the ticket in 2-3 points!!
[THANGAVEL] !!JIRA-ID!!: !!Description!! – ==In Progress.== !!Brief Summary of the ticket in 3-4 points!!

[JYOSTNA] AIR-3662: Create a DB connection and initialize server – ==In Progress.==
Once the user hits the API (api/v1/location?_sn=), a connection is established with Cassandra DB to fetch the required data, such as the id (MAC address) and the respective neighbors of the device.
Established a connection with Postgres DB to insert the response received from the Google API.

[JYOSTNA] AIR-3664: Create postgres table – ==In Progress.==
Created a table named geolocation with the necessary fields (a minimal usage sketch follows at the end of this update). The query for the same is as follows:
CREATE TABLE IF NOT EXISTS geolocation (
    id SERIAL PRIMARY KEY,
    device_id TEXT NOT NULL,
    customer TEXT NOT NULL,
    latitude TEXT NOT NULL,
    longitude TEXT NOT NULL,
    accuracy TEXT NOT NULL,
    map_url TEXT,
    timestamp INT NOT NULL,
    expires_at INT NOT NULL
);

[SHRI] !!JIRA-ID!!: !!Description!! – ==In Progress.== !!Brief Summary of the ticket in 3-4 points!!

JIRA TICKET(S) ON HOLD:

[VIMAL] !!JIRA-ID!!: !!Description!! – ==On Hold.== !!Brief Summary of the ticket in 2-3 points!!
[VISHNU] !!JIRA-ID!!: !!Description!! – ==On Hold.== !!Brief Summary of the ticket in 2-3 points!!
[SANJAY] !!JIRA-ID!!: !!Description!! – ==On Hold.== !!Brief Summary of the ticket in 2-3 points!!
[GANAPATHY] !!JIRA-ID!!: !!Description!! – ==On Hold.== !!Brief Summary of the ticket in 2-3 points!!
[THANGAVEL] !!JIRA-ID!!: !!Description!! – ==On Hold.== !!Brief Summary of the ticket in 2-3 points!!
[JYOSTNA] !!JIRA-ID!!: !!Description!! – ==On Hold.== !!Brief Summary of the ticket in 2-3 points!!
[SHRI] !!JIRA-ID!!: !!Description!! – ==On Hold.== !!Brief Summary of the ticket in 2-3 points!!
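As a reference for the geolocation work above (AIR-3662 / AIR-3664), here is a minimal, hypothetical Python sketch that inserts a Google Geolocation API style response into the geolocation table defined above. The connection details, function name and response shape are assumptions for illustration, not the actual service code.

```python
# Hypothetical sketch: persist a geolocation lookup result into the table from AIR-3664.
# Connection parameters and the response structure are assumptions.
import time
import psycopg2

conn = psycopg2.connect(host="localhost", dbname="astral", user="astral", password="secret")

def save_location(device_id: str, customer: str, response: dict, ttl_seconds: int = 86400) -> None:
    """Insert one row into the geolocation table; `response` mirrors a Google API style payload."""
    now = int(time.time())
    with conn, conn.cursor() as cur:
        cur.execute(
            """
            INSERT INTO geolocation
                (device_id, customer, latitude, longitude, accuracy, map_url, timestamp, expires_at)
            VALUES (%s, %s, %s, %s, %s, %s, %s, %s)
            """,
            (
                device_id,
                customer,
                str(response["location"]["lat"]),
                str(response["location"]["lng"]),
                str(response["accuracy"]),
                response.get("map_url"),
                now,
                now + ttl_seconds,
            ),
        )

save_location("AA:BB:CC:DD:EE:FF", "airtel",
              {"location": {"lat": 12.97, "lng": 80.19}, "accuracy": 30})
```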
Status Update(13th - 17th)
Executive Summary:
!!!!

Accomplishments:
!!!!

Issues for Escalations/Blockers:
!!NIL!!

Plan for next week:
!!!!

PTO and Vacation:
Vimal: !!!!
Vishnu: !!!!
Ganapathy: !!!!
Thangavel: !!!!
Jyotsna: !!!!
Shri Ramki: !!!!

DETAILED SUMMARY :

JIRA TICKET(S) COMPLETED:

[VIMAL] AIR-3683: Create a service to maintain devices last heard – ==Completed.==
Created a new service to maintain the last-heard timestamp for each device.
The service keeps track of the last heard in Redis.

[VIMAL] AIR-3654, AIR-3649: Create a skeleton for geolocation service – ==Completed.==
Created a proper base structure for the new service with a CI/CD pipeline.

[VISHNU] !!JIRA-ID!!: !!Description!! – ==Closed.== !!Brief Summary of the ticket in 3-4 points!!
[GANAPATHY] !!JIRA-ID!!: !!Description!! – ==Closed.== !!Brief Summary of the ticket in 3-4 points!!
[THANGAVEL] !!JIRA-ID!!: !!Description!! – ==Closed.== !!Brief Summary of the ticket in 3-4 points!!
[JYOSTNA] !!JIRA-ID!!: !!Description!! – ==Closed.== !!Brief Summary of the ticket in 3-4 points!!
[SHRI] !!JIRA-ID!!: !!Description!! – ==Closed.== !!Brief Summary of the ticket in 3-4 points!!

JIRA TICKET(S) IN PROGRESS:

[VIMAL] AIR-3688: Create a pyspark script for getting hour based (type=client) orc data – ==In progress.==
Created a script for getting the private & export ORC files for an hour.
Joins those columns and drops duplicates (a minimal sketch of this idea follows at the end of this update).
Saves the result to another HDFS path.
Tested in Prod.

[VISHNU] AIR-3644: Enable kafka plugin in the APISIX config – ==In Progress.==
Added the Kafka plugin to the APISIX config so that the API log message is published directly to the mentioned topic.

[GANAPATHY] !!JIRA-ID!!: !!Description!! – ==In Progress.== !!Brief Summary of the ticket in 2-3 points!!
[THANGAVEL] !!JIRA-ID!!: !!Description!! – ==In Progress.== !!Brief Summary of the ticket in 3-4 points!!
[JYOSTNA] !!JIRA-ID!!: !!Description!! – ==In Progress.== !!Brief Summary of the ticket in 3-4 points!!
[SHRI] !!JIRA-ID!!: !!Description!! – ==In Progress.== !!Brief Summary of the ticket in 3-4 points!!

[VIRUTHIKA] AIR-3655: WEB change for geolocation – ==In Progress.==
Completed the local setup of Docker and localhost.
Completed the CSS part of the map in the dashboard.
Working on slight modifications to make it similar to the wireframe.

JIRA TICKET(S) ON HOLD:

[VIMAL] !!JIRA-ID!!: !!Description!! – ==On Hold.== !!Brief Summary of the ticket in 2-3 points!!
[VISHNU] !!JIRA-ID!!: !!Description!! – ==On Hold.== !!Brief Summary of the ticket in 2-3 points!!
[SANJAY] !!JIRA-ID!!: !!Description!! – ==On Hold.== !!Brief Summary of the ticket in 2-3 points!!
[GANAPATHY] !!JIRA-ID!!: !!Description!! – ==On Hold.== !!Brief Summary of the ticket in 2-3 points!!
[THANGAVEL] !!JIRA-ID!!: !!Description!! – ==On Hold.== !!Brief Summary of the ticket in 2-3 points!!
[JYOSTNA] !!JIRA-ID!!: !!Description!! – ==On Hold.== !!Brief Summary of the ticket in 2-3 points!!
[SHRI] !!JIRA-ID!!: !!Description!! – ==On Hold.== !!Brief Summary of the ticket in 3-4 points!!
[VIRUTHIKA] !!JIRA-ID!!: !!Description!! – ==On Hold.== !!Brief Summary of the ticket in 3-4 points!!
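For reference, here is a minimal PySpark sketch of the hourly dedup idea described in AIR-3688 above. The HDFS paths, join key and column names are assumptions for illustration, not the production script.

```python
# Hypothetical sketch of the hourly client-ORC dedup job (AIR-3688-style logic).
# Paths and column names are assumptions.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("client-orc-hourly-dedup").getOrCreate()

private_df = spark.read.orc("hdfs:///data/private/2024/05/22/14/")   # assumed hourly path
export_df = spark.read.orc("hdfs:///data/export/2024/05/22/14/")     # assumed hourly path

# Join the two sources on an assumed client key, then drop duplicate rows.
joined = private_df.join(export_df, on="client_mac", how="outer")
deduped = joined.dropDuplicates()

# Write the per-hour result to a separate HDFS path for the 6-hourly consolidation job.
deduped.write.mode("overwrite").orc("hdfs:///data/fingerprint/hourly/2024/05/22/14/")
```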
Facebook Wav2vec2 Model
Overview:

Research Paper:

Transformer-based neural networks have been revolutionizing the natural language processing field, but are only starting to become popular in the speech processing community. Wav2vec 2.0 is set to change that. Its architecture is based on the Transformer’s encoder, with a training objective similar to BERT’s masked language modeling objective, but adapted for speech. This new method allows for efficient semi-supervised training: first, pre-train the model on a large quantity of unlabeled speech, then fine-tune on a smaller labeled dataset. In wav2vec 2.0’s original paper, the authors demonstrated that fine-tuning the model on only one hour of labeled speech data could beat the previous state-of-the-art systems trained on 100 times more labeled data.

Training Phases:
Comparison between the two training phases:

Architecture:
First Phase Training (Self-supervised Learning):
Final fine-tuned model:

wav2vec 2.0 architecture and its pre-training process:
Above is an overview of the wav2vec 2.0 architecture and its pre-training process. There are four important elements in this diagram: the feature encoder, context network, quantization module, and the contrastive loss (pre-training objective).

Feature encoder:
The feature encoder’s job is to reduce the dimensionality of the audio data, converting the raw waveform into a sequence of feature vectors Z0, Z1, Z2, …, ZT, one every 20 milliseconds. Its architecture is simple: a 7-layer convolutional neural network (single-dimensional) with 512 channels at each layer. The waveform is normalized before being sent to the network, and the kernel width and strides of the convolutional layers decrease as we get higher in the network. The feature encoder has a total receptive field of 400 samples or 25 ms of audio (audio data is encoded at a sample rate of 16 kHz).

Quantization module:
One of the main obstacles of using Transformers for speech processing is the continuous nature of speech. Written language can be naturally discretized into words or sub-words, therefore creating a finite vocabulary of discrete units. Speech doesn’t have such natural sub-units. We could use phones as a discrete system, but then we would need humans to first label the entire dataset beforehand, so we wouldn’t be able to pre-train on unlabeled data. Wav2vec 2.0 proposes to automatically learn discrete speech units by sampling from the Gumbel-Softmax distribution. Possible units are made of codewords sampled from codebooks (groups). Codewords are then concatenated to form the final speech unit. Wav2vec uses 2 groups with 320 possible words in each group, hence a theoretical maximum of 320 x 320 = 102,400 speech units.

The latent features are multiplied by the quantization matrix to give the logits: one score for each of the possible codewords in each codebook. The Gumbel-Softmax trick allows sampling a single codeword from each codebook after converting these logits into probabilities. It is similar to taking the argmax, except that the operation is fully differentiable. Moreover, a small randomness effect, whose strength is controlled by a temperature argument, is introduced to the sampling process to facilitate training and codeword utilization.

Context network:
The core of wav2vec 2.0 is its Transformer encoder, which takes as input the latent feature vectors and processes them through 12 Transformer blocks for the BASE version of the model, or 24 blocks for the LARGE version.
To match the inner dimension of the Transformer encoder, the input sequence first needs to go through a feature projection layer to increase the dimension from 512 (output of the CNN) to 768 for BASE or 1,024 for LARGE.

One difference from the original Transformer architecture is how positional information is added to the input. Since the self-attention operation of the Transformer doesn’t preserve the order of the input sequence, fixed pre-generated positional embeddings were added to the input vectors in the original implementation. The wav2vec model instead uses a new grouped convolution layer to learn relative positional embeddings by itself.

Pre-training & contrastive loss:
The pre-training process uses a contrastive task to train on unlabeled speech data. A mask is first randomly applied in the latent space, covering ~50% of the projected latent feature vectors. Masked positions are then replaced by the same trained vector Z’M before being fed to the Transformer network.

The final context vectors then go through the last projection layer to match the dimension of the quantized speech units Qt. For each masked position, 100 negative distractors are uniformly sampled from other positions in the same sentence. The model then compares the similarity (cosine similarity) between the projected context vector C’t and the true positive target Qp along with all negative distractors Qñ. The contrastive loss then encourages high similarity with the true positive target and penalizes high similarity scores with negative distractors.

Diversity loss:
During pre-training, another loss is added to the contrastive loss to encourage the model to use all codewords equally often. This works by maximizing the entropy of the Gumbel-Softmax distribution, preventing the model from always choosing from a small sub-group of all available codebook entries.

Conclusion:
This concludes our tour of wav2vec 2.0 and its pre-training process. The resulting pre-trained model can be used for a variety of speech downstream tasks: automatic speech recognition, emotion detection, speaker recognition, language detection.
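To illustrate the quantization module described above, here is a minimal, hedged PyTorch sketch of Gumbel-Softmax codeword selection with 2 codebooks of 320 entries each. The tensor shapes, codeword dimension and variable names are assumptions for illustration; this is not the fairseq implementation of wav2vec 2.0.

```python
# Minimal sketch of wav2vec 2.0-style codebook sampling with Gumbel-Softmax.
# 2 groups x 320 codewords; dimensions are illustrative, not the official implementation.
import torch
import torch.nn.functional as F

groups, codewords_per_group, codeword_dim = 2, 320, 128
latent_dim = 512                                   # output dim of the feature encoder

projection = torch.nn.Linear(latent_dim, groups * codewords_per_group)
codebooks = torch.nn.Parameter(torch.randn(groups, codewords_per_group, codeword_dim))

z = torch.randn(1, 49, latent_dim)                 # (batch, time steps, features), ~1 s of audio

logits = projection(z).view(1, 49, groups, codewords_per_group)
# Differentiable "argmax": one-hot samples per group, with a temperature controlling randomness.
one_hot = F.gumbel_softmax(logits, tau=2.0, hard=True, dim=-1)

# Select one codeword per group and concatenate them into the final quantized unit.
quantized = torch.einsum("btgc,gcd->btgd", one_hot, codebooks)
quantized = quantized.reshape(1, 49, groups * codeword_dim)   # (1, 49, 256)
print(quantized.shape)
```

The temperature (tau) plays the role described in the text: higher values inject more randomness early in training, and it is typically annealed so the sampling gets closer to a hard argmax over time.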
Whisper Model
Architecture:
Whisper architecture uses an encoder-decoder transformer, which as the name implies consists of an encoder and a decoder. The encoder obtains a latent representation from speech, and the decoder outputs text from the latent representation.

Audio files are handled in 16 kHz PCM format, scaled to a range of -1 to 1 and frequency converted with an 80-channel Mel Spectrogram. The window size for the Mel Spectrogram conversion is 25 ms and the stride is 10 ms. The Mel Spectrogram is used in segments of 30 seconds each.

The encoder is run only once per 30-second segment to extract the latent representation from the spectrogram. First, it applies a convolution twice, using GELU for activation with a filter size of 3, to compute the input embedding. The second convolution has a stride of 2. The transformer performs position embedding using a sine function. Since the encoder is executed only once per 30-second segment, the load is not very high.

The decoder outputs the probability of occurrence for each one of the 51865 tokens from the latent representation. The tokens are determined by performing a Greedy Search or Beam Search on the probability of occurrence of the tokens in the output. The beam search has a beam size (number of search branches) of 5. The decoder outputs a maximum of 224 tokens per 30-second segment, so it is executed up to 224 times. If two consecutive timestamp tokens appear in the decoding results across the 224 inferences, the token sequence of speech recognition is chosen and output.

For the chosen token sequence, GPT2TokenizerFast decodes it into text. The tokenizer is a byte-level BPE text tokenizer, which does not output words but Unicode byte codes. The timestamp gives you the number of seconds of actual speech recognition that was done. The unprocessed part of the Mel Spectrogram is then cut out, padded back to 30 seconds of Mel Spectrogram, and the decoding process is repeated.

Both encoder and decoder have the same transformer architecture.

Keywords:

Mel Spectrogram
A mel spectrogram combines the concept of a spectrogram with the mel scale. It calculates the spectrogram of the audio signal as usual. However, instead of using the linear frequency scale on the y-axis, it uses the mel scale. This results in a spectrogram where the frequency bins are spaced according to human auditory perception.

Multi Layer Perceptron
Simple Architecture: An MLP in a transformer typically consists of two fully-connected (dense) layers with an activation function in between. The activation function, often GELU (Gaussian Error Linear Unit) in transformers, adds non-linearity.

Self Attention Layers
In an encoder with self-attention layers, each element in the sequence attends to (focuses on) other relevant elements within the same sequence. This allows the model to capture long-range dependencies and understand the context of each audio segment.

Cross Attention Layers
In the Whisper model, the decoder uses cross-attention. The decoder takes a previously translated word (query) and attends to the encoded source sentence (keys and values) to generate the next translated word. It focuses on parts of the source language most relevant to the current translation step.
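As an illustration of the decoding behaviour described above (beam search with a beam size of 5 and per-segment timestamps), here is a minimal, hedged sketch using the open-source openai-whisper Python package. The model size and audio file name are assumptions for illustration.

```python
# Minimal sketch: transcribe audio with Whisper using beam search (beam size 5).
# Requires the openai-whisper package; model size and file name are assumptions.
import whisper

model = whisper.load_model("base")                      # encoder-decoder transformer described above
result = model.transcribe("speech.wav", beam_size=5)    # beam search over token probabilities

print(result["text"])
for segment in result["segments"]:
    # Timestamp tokens let Whisper report how much of each 30 s window was actually decoded.
    print(f'[{segment["start"]:.2f}s - {segment["end"]:.2f}s] {segment["text"]}')
```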
Prompt Engine
What is a prompt engine?

> A prompt engine is a system or tool that generates prompts or instructions to guide users in providing input or interacting with a system. In the context of natural language processing (NLP) or machine learning (ML), a prompt engine can generate prompts that elicit specific responses from users to train or fine-tune models, or it can generate prompts to direct the behavior of AI models during inference.

Why a prompt engine?

Potential Models

In-house Models
Model Token Parameters Source Performance Dataset

Endorsed Model

Dataset

Resource Required

| Model | Strategy | Parameters | CPU | RAM |
|:---:|:---:|:---:|:---:|:---:|
| OpenELM-270M | Layer-wise scaling | 270M | 6 | 8GB |
| OpenELM-3B | Layer-wise scaling | 3B | 6 | 8GB |
| Stratature-B | Coherent Modeling | 100B | 48 | 128GB |
| Phi-3 | Layer-wise scaling | 3.8B | 8 | 10GB |
| Meta-Llama-3-70b | Grouped-Query Attention | 70B | 16 | 16GB |

Proof Of Concept

Complexity
Integrations
Source?
Optimization/Quantization techniques endorsed?
Deployments
Inherit Use Cases
Use cases in Astral (a service which collects data from Access Points to comprehend information about access points)?
Gitlab Upgrade version 13.8.8 to 13.12.15
Stage 1 : Upgrade Test Machine from Gitlab v13.8.8 to v13.12.15
Test Machine : 192.168.138.235
Target Gitlab package version : 13.12.15-ce.0.el7

| Gitlab and dependent versions | Current Test Machine | After Upgrade |
| :--- | :--- | :--- |
| Gitlab Version | 13.8.8 | 13.12.15 |
| Ruby | 2.7.2p137 | 2.7.2p137 |
| Gem | 3.1.4 | 3.1.4 |
| Bundler | 2.1.4 | 2.1.4 |
| Rake | 13.0.3 | 13.0.3 |
| Redis | 5.0.9 | 6.0.14 |
| Git | 2.29.0 | 2.31.1 |
| Sidekiq | 5.2.9 | 5.2.9 |
| Go version | unknown | unknown |
| Gitlab shell | 13.15.1 | 13.18.1 |
| DB | PostgreSql | PostgreSql |
| DB version | 12.6 | 12.6 |
| Release | ce.0.el7 | ce.0.el7 |
| Architecture | x86_64 | x86_64 |
| Gitlab Workhorse | v8.59.3 | 6.0.14 |
| Gitlab API | v4 | v4 |
| Rails | 6.0.3.4 | 6.0.3.7 |

PostgreSQL Version 13.8.8. Official Link :

Steps Followed:
1. Using yum install, install the specific version.
   Command : yum install gitlab-ce-<version>
   e.g. : yum install gitlab-ce-13.12.15-ce.0.el7
2. Check for the specific PostgreSQL version; if it has not updated, try restarting PostgreSQL.
   Command : gitlab-ctl restart postgresql
3. sudo gitlab-ctl reconfigure
4. Check your respective dependencies and Gitlab version.
   Command : gitlab-rake gitlab:env:info

Stage 1 : Upgrade Prod Machine from Gitlab v13.1.11 to v13.8.8
Target Gitlab package version : 13.12.15-ce.0.el7

| Gitlab and dependent versions | Current Production Machine | After Upgrade |
| :--- | :--- | :--- |
| Gitlab Version | 13.8.8 | 13.12.15 |
| Ruby | 2.7.2p137 | |
| Gem | 3.1.4 | |
| Bundler | 2.1.4 | |
| Rake | 13.0.3 | |
| Redis | 5.0.9 | |
| Git | 2.29.0 | |
| Sidekiq | 5.2.9 | |
| Go version | unknown | |
| Gitlab shell | 13.15.1 | |
| DB | PostgreSql | |
| DB version | 12.6 | |
| Release | ce.0.el7 | |
| Architecture | x86_64 | |
| Gitlab Workhorse | v8.59.3 | |
| Gitlab API | v4 | |
| Rails | 6.0.3.4 | |
STM32 MLPerf End To End Setup
Required Components
STM32
Energy Monitor
Level Shifters
Arduino UNO
USB B For Arduino
USB Type A To B
Jumper Wires (Male To Female, Female To Female, Male To Male)
Performance Mode - Hardware Setup
Set up the STM32 with the system as shown in the below image.
Performance Mode - Software Setup
Download the EnergyRunner framework from EEMBC, which helps visualize both the performance and the energy modes of the particular model for the particular DUT (Device Under Test).
EnergyRunner® Benchmark Framework for MLPerf Tiny - Energy Measurement - EEMBC - Embedded Microprocessor Benchmark Consortium -
NOTE : Create an account in the EEMBC portal and download the software.
After connecting the STM32 to the system, it holds on to a COM port, and that port can be seen in the EEMBC Framework.
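As a quick sanity check that the board has enumerated and grabbed a COM port before pointing the EnergyRunner framework at it, a small script like the hedged Python sketch below can list the serial ports visible to the host. The pyserial dependency and the port naming are assumptions, not part of the EEMBC tooling.

```python
# List serial ports so the STM32/DUT COM port can be identified.
# Requires: pip install pyserial
from serial.tools import list_ports

def find_candidate_ports():
    """Return (device, description) for every serial port the host can see."""
    return [(p.device, p.description) for p in list_ports.comports()]

if __name__ == "__main__":
    ports = find_candidate_ports()
    if not ports:
        print("No serial ports detected - check the USB cable and drivers.")
    for device, description in ports:
        # The STM32 board typically shows up as an ST-Link / Virtual COM port.
        print(f"{device}: {description}")
```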
GET item from sharepoint
1. Identify the site ID
2. Identify the drive ID
3. Identify the item ID
4. Download the item (by its item ID) with the generated session token

Get Drive ID
GET ITEM ID
Identify Item ID
GET File
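The four steps above map naturally onto the Microsoft Graph API. The hedged Python sketch below shows one way to do it with the requests library; the site hostname, site path, file name, and the assumption that a bearer access token has already been obtained are all placeholders, not values from this document.

```python
# Hypothetical walk-through of the four steps against Microsoft Graph.
# Assumes ACCESS_TOKEN was already obtained (e.g. via MSAL); all names below are placeholders.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
ACCESS_TOKEN = "<bearer-token>"
HEADERS = {"Authorization": f"Bearer {ACCESS_TOKEN}"}

# 1. Identify the site ID from the SharePoint hostname and site path.
site = requests.get(f"{GRAPH}/sites/contoso.sharepoint.com:/sites/TeamSite", headers=HEADERS).json()
site_id = site["id"]

# 2. Identify the drive ID (document library) under that site.
drives = requests.get(f"{GRAPH}/sites/{site_id}/drives", headers=HEADERS).json()
drive_id = drives["value"][0]["id"]

# 3. Identify the item ID by resolving the file path inside the drive.
item = requests.get(f"{GRAPH}/drives/{drive_id}/root:/Reports/weekly.xlsx", headers=HEADERS).json()
item_id = item["id"]

# 4. Download the item content using the item ID.
content = requests.get(f"{GRAPH}/drives/{drive_id}/items/{item_id}/content", headers=HEADERS)
with open("weekly.xlsx", "wb") as f:
    f.write(content.content)
```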
SWin Transformers
Vision Transformers (ViT)
Reference Paper:
ViTs treat images as sequences of patches, borrowing a page from NLP where words are treated as tokens. This fundamental difference in data processing, coupled with the power of self-attention, enables ViTs to learn intricate patterns and relationships within images and gives them a unique advantage.
While the Transformer architecture has become the de-facto standard for natural language processing tasks, its applications to computer vision remain limited. In vision, attention is either applied in conjunction with convolutional networks, or used to replace certain components of convolutional networks while keeping their overall structure in place. We show that this reliance on CNNs is not necessary and a pure transformer applied directly to sequences of image patches can perform very well on image classification tasks. When pre-trained on large amounts of data and transferred to multiple mid-sized or small image recognition benchmarks (ImageNet, CIFAR-100, VTAB, etc.), Vision Transformer (ViT) attains excellent results compared to state-of-the-art convolutional networks while requiring substantially fewer computational resources to train.
How ViT Works:
The concept of self-attention has been adapted for processing images with the use of Vision Transformers. Unlike text data, images are inherently two-dimensional, comprising pixels arranged in rows and columns. To address this challenge, ViTs convert images into sequences that can be processed by the Transformer.
Split an image into patches: The first step in processing an image with a Vision Transformer is to divide it into smaller, fixed-size patches. Each patch represents a local region of the image.
Flatten the patches: Within each patch, the pixel values are flattened into a single vector. This flattening process allows the model to treat image patches as sequential data.
Produce lower-dimensional linear embeddings: These flattened patch vectors are then projected into a lower-dimensional space using trainable linear transformations. This step reduces the dimensionality of the data while preserving important features.
Add positional encodings: To retain information about the spatial arrangement of the patches, positional encodings are added. These encodings help the model understand the relative positions of different patches in the image.
Feed the sequence into a Transformer encoder: The input to a standard Transformer encoder comprises the sequence of patch embeddings and positional embeddings. This encoder is composed of multiple layers, each containing two critical components: multi-head self-attention (MSA) mechanisms, responsible for calculating attention weights to prioritize input sequence elements during predictions, and multi-layer perceptron (MLP) blocks. Before each block, layer normalization (LN) is applied to appropriately scale and center the data within the layer, ensuring stability and efficiency during training. During training, an optimizer is also used to adjust the model's parameters in response to the loss computed during each training iteration.
Classification Token: To enable image classification, a special "classification token" is prepended to the sequence of patch embeddings. This token's state at the output of the Transformer encoder serves as the representation of the entire image.
Following the original Vision Transformer, some follow-up works have been made:
DeiT (Data-efficient Image Transformers) by Facebook AI.
DeiT models are distilled vision transformers. The authors of DeiT also released more efficiently trained ViT models, which you can directly plug into ViTModel or ViTForImageClassification. There are 4 variants available (in 3 different sizes): facebook/deit-tiny-patch16-224, facebook/deit-small-patch16-224, facebook/deit-base-patch16-224 and facebook/deit-base-patch16-384. Note that one should use DeiTImageProcessor in order to prepare images for the model.
BEiT (BERT pre-training of Image Transformers) by Microsoft Research. BEiT models outperform supervised pre-trained vision transformers using a self-supervised method inspired by BERT (masked image modeling) and based on a VQ-VAE.
DINO (a method for self-supervised training of Vision Transformers) by Facebook AI. Vision Transformers trained using the DINO method show very interesting properties not seen with convolutional models. They are capable of segmenting objects, without having ever been trained to do so. DINO checkpoints can be found on the hub.
MAE (Masked Autoencoders) by Facebook AI. By pre-training Vision Transformers to reconstruct pixel values for a high portion (75%) of masked patches (using an asymmetric encoder-decoder architecture), the authors show that this simple method outperforms supervised pre-training after fine-tuning.
SWin Transformers:
This paper presents a new vision Transformer, called Swin Transformer, that capably serves as a general-purpose backbone for computer vision. Challenges in adapting Transformer from language to vision arise from differences between the two domains, such as large variations in the scale of visual entities and the high resolution of pixels in images compared to words in text. To address these differences, we propose a hierarchical Transformer whose representation is computed with Shifted Windows. A key design element of Swin Transformer is its shift of the window partition between consecutive self-attention layers. The shifted windows bridge the windows of the preceding layer, providing connections among them that significantly enhance modeling power. This strategy is also efficient with regard to real-world latency: all query patches within a window share the same key set, which facilitates memory access in hardware. The shifted windowing scheme brings greater efficiency by limiting self-attention computation to non-overlapping local windows while also allowing for cross-window connection. This hierarchical architecture has the flexibility to model at various scales and has linear computational complexity with respect to image size. These qualities of Swin Transformer make it compatible with a broad range of vision tasks, including image classification (87.3 top-1 accuracy on ImageNet-1K) and dense prediction tasks such as object detection (58.7 box AP and 51.1 mask AP on COCO test-dev) and semantic segmentation (53.5 mIoU on ADE20K val). Its performance surpasses the previous state-of-the-art by a large margin of +2.7 box AP and +2.6 mask AP on COCO, and +3.2 mIoU on ADE20K, demonstrating the potential of Transformer-based models as vision backbones. The hierarchical design and the shifted window approach also prove beneficial for all-MLP architectures.
Architecture:
An overview of the Swin Transformer architecture, which illustrates the tiny version (SwinT). It first splits an input RGB image into non-overlapping patches by a patch splitting module, like ViT. Each patch is treated as a “token” and its feature is set as a concatenation of the raw pixel RGB values.
In our implementation, we use a patch size of 4 × 4 and thus the feature dimension of each patch is 4 × 4 × 3 = 48. A linear embedding layer is applied on this raw-valued feature to project it to an arbitrary dimension (denoted as C). Several Transformer blocks with modified self-attention computation (Swin Transformer blocks) are applied on these patch tokens.
Swin Transformer block
Swin Transformer is built by replacing the standard multi-head self-attention (MSA) module in a Transformer block by a module based on shifted windows, with other layers kept the same. A Swin Transformer block consists of a shifted window based MSA module, followed by a 2-layer MLP with GELU nonlinearity in between. A LayerNorm (LN) layer is applied before each MSA module and each MLP, and a residual connection is applied after each module.
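To make the patch splitting and window-based attention described above concrete, here is a small, hedged PyTorch sketch of the patch embedding and window partitioning steps. The tensor shapes follow the 4 × 4 patch / C-dimensional embedding convention above, but the code is an illustrative simplification, not the reference Swin implementation.

```python
# Illustrative sketch of Swin-style patch embedding and window partitioning (PyTorch).
# Not the official implementation; shapes follow the 4x4 patch / C-dim embedding described above.
import torch
import torch.nn as nn

class PatchEmbed(nn.Module):
    def __init__(self, patch_size=4, in_chans=3, embed_dim=96):
        super().__init__()
        # A strided convolution is equivalent to splitting into non-overlapping
        # 4x4 patches (4*4*3 = 48 raw values) and applying a linear embedding to dim C.
        self.proj = nn.Conv2d(in_chans, embed_dim, kernel_size=patch_size, stride=patch_size)

    def forward(self, x):                    # x: (B, 3, H, W)
        x = self.proj(x)                     # (B, C, H/4, W/4)
        return x.flatten(2).transpose(1, 2)  # (B, num_patches, C)

def window_partition(x, window_size=7):
    """Split a (B, H, W, C) feature map into non-overlapping windows for local MSA."""
    B, H, W, C = x.shape
    x = x.view(B, H // window_size, window_size, W // window_size, window_size, C)
    # Each window becomes one local "sentence" of window_size*window_size tokens.
    return x.permute(0, 1, 3, 2, 4, 5).reshape(-1, window_size * window_size, C)

if __name__ == "__main__":
    img = torch.randn(1, 3, 224, 224)
    tokens = PatchEmbed()(img)                        # (1, 3136, 96)
    feat = tokens.view(1, 56, 56, 96)                 # back to the spatial layout
    windows = window_partition(feat, window_size=7)   # (64, 49, 96): 8x8 windows of 49 tokens
    print(tokens.shape, windows.shape)
```

Self-attention would then be computed independently inside each 49-token window, and the shifted-window variant simply offsets the grid before partitioning so that neighboring windows exchange information in the next block.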
Notes
Bottleneck
The bottleneck in a neural network is just a layer with fewer neurons than the layer below or above it. Having such a layer encourages the network to compress feature representations (of salient features for the target variable) to best fit in the available space. Improvements to compression occur due to the goal of reducing the cost function, as for all weight updates.
C2F
"Coarse to fine" networks, also known as hierarchical networks or multi-resolution networks, are a type of neural network architecture that processes input data at multiple levels of detail or resolution. These networks are designed to effectively capture both global context and fine-grained details in the input data.
Inverted Residual
An inverted residual block, also known as an MBConv block, is a specific type of residual block used in convolutional neural networks (CNNs). It is designed to be more efficient than traditional residual blocks, particularly for applications on mobile devices with limited processing power.
CNN
A convolutional network comprises layers of learnable filters that convolve over the input image. These filters are designed to detect specific features, such as edges, textures, or more complex patterns. Additionally, pooling layers downsample the feature maps, gradually reducing the spatial dimensions while retaining essential information. This hierarchical approach allows CNNs to learn and represent hierarchical features, capturing intricate details as they progress through the network.
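For reference, below is a hedged PyTorch sketch of a MobileNetV2-style inverted residual (MBConv) block. The expansion ratio and channel counts are illustrative assumptions, and this is a simplified version rather than any particular library's implementation.

```python
# Simplified inverted residual (MBConv) block: expand -> depthwise conv -> project,
# with a skip connection when input and output shapes match. Illustrative only.
import torch
import torch.nn as nn

class InvertedResidual(nn.Module):
    def __init__(self, in_ch, out_ch, stride=1, expand_ratio=6):
        super().__init__()
        hidden = in_ch * expand_ratio
        self.use_skip = stride == 1 and in_ch == out_ch
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, hidden, 1, bias=False), nn.BatchNorm2d(hidden), nn.ReLU6(inplace=True),  # expand
            nn.Conv2d(hidden, hidden, 3, stride, 1, groups=hidden, bias=False),                       # depthwise
            nn.BatchNorm2d(hidden), nn.ReLU6(inplace=True),
            nn.Conv2d(hidden, out_ch, 1, bias=False), nn.BatchNorm2d(out_ch),                         # project (linear)
        )

    def forward(self, x):
        return x + self.block(x) if self.use_skip else self.block(x)

if __name__ == "__main__":
    x = torch.randn(1, 32, 56, 56)
    print(InvertedResidual(32, 32)(x).shape)  # torch.Size([1, 32, 56, 56])
```

The "inverted" part is that the block expands to a wider hidden dimension before projecting back down, the opposite of a classic bottleneck, which is why it pairs well with cheap depthwise convolutions on mobile hardware.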
Apple's OpenELM
OpenELM: An Efficient Language Model Family with Open Training and Inference Framework
What is OpenELM?
OpenELM consists of a family of eight large language models, with parameters ranging from 270 million to 3 billion. These models are engineered to optimize both the efficiency and accuracy of AI functionalities on consumer devices, using a pioneering layer-wise scaling strategy. This technique ensures an optimal distribution of parameters across the transformer model layers, leading to a significant improvement in performance. Notably, OpenELM models demonstrate a 2.36% increase in accuracy over other models with similar configurations.
OpenELM was pretrained on public datasets. Specifically, our pre-training dataset contains RefinedWeb, PILE, a subset of RedPajama, and a subset of Dolma v1.6.
OpenELM uses a layer-wise scaling strategy to efficiently allocate parameters within each layer of the transformer model, leading to enhanced accuracy. OpenELM models were pretrained using the CoreNet library. Released both pretrained and instruction-tuned models with 270M, 450M, 1.1B and 3B parameters. The pre-training dataset contains RefinedWeb, deduplicated PILE, a subset of RedPajama, and a subset of Dolma v1.6, totaling approximately 1.8 trillion tokens.
What are parameters?
The term "parameters" refers to the number of trainable variables within the model architecture. These parameters essentially capture the model's understanding of the relationships between words and concepts based on the data it was trained on.
OpenELM's Approach: Apple's OpenELM project offers several pre-trained models with varying parameter sizes (e.g., 270 million, 450 million, 1.1 billion, and 3 billion). This allows users to choose a model that best suits their needs and computational resources. Models with fewer parameters are typically faster to run on devices with limited processing power but might have slightly lower accuracy.
Steps to run OpenELM:
1. Clone the repo from Hugging Face
2. Create a new access token
3. Run the below commands for the respective parameter model (270M, 450M, 1B, 3B); an illustrative example is sketched below
Sample output:
Reference link:
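Since the exact commands are not captured above, the following is a hedged sketch of one way to run an OpenELM checkpoint through the Hugging Face transformers library. The model ID, the use of the Llama-2 tokenizer, and the trust_remote_code flag reflect common usage of these checkpoints but should be treated as assumptions to verify against the official model card.

```python
# Hedged example of running an OpenELM checkpoint via Hugging Face transformers.
# Model ID, tokenizer choice and flags are assumptions - verify against the model card.
from transformers import AutoModelForCausalLM, AutoTokenizer

HF_TOKEN = "<your-hugging-face-access-token>"

# OpenELM checkpoints ship custom modeling code, hence trust_remote_code=True.
model = AutoModelForCausalLM.from_pretrained(
    "apple/OpenELM-270M", trust_remote_code=True, token=HF_TOKEN
)
# The OpenELM card points users at a Llama tokenizer (assumption; gated on Hugging Face).
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf", token=HF_TOKEN)

inputs = tokenizer("Once upon a time there was", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Swapping the model ID for the 450M, 1.1B, or 3B variants would follow the same pattern, with correspondingly higher memory requirements.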
APPLE OpenELM
OpenELM: An Efficient Language Model Family with Open Training and Inference Framework
What is OpenELM?
OpenELM consists of a family of eight large language models, with parameters ranging from 270 million to 3 billion. These models are engineered to optimize both the efficiency and accuracy of AI functionalities on consumer devices, using a pioneering layer-wise scaling strategy. This technique ensures an optimal distribution of parameters across the transformer model layers, leading to a significant improvement in performance. Notably, OpenELM models demonstrate a 2.36% increase in accuracy over other models with similar configurations.
OpenELM was pretrained on public datasets. Specifically, our pre-training dataset contains RefinedWeb, PILE, a subset of RedPajama, and a subset of Dolma v1.6.
OpenELM uses a layer-wise scaling strategy to efficiently allocate parameters within each layer of the transformer model, leading to enhanced accuracy. OpenELM models were pretrained using the CoreNet library. Released both pretrained and instruction-tuned models with 270M, 450M, 1.1B and 3B parameters. The pre-training dataset contains RefinedWeb, deduplicated PILE, a subset of RedPajama, and a subset of Dolma v1.6, totaling approximately 1.8 trillion tokens.
The model can be run in the 270M, 450M, 1B, or 3B parameter variants.
Flash image
Updating the firmware via USB
Connect a C-type USB cable to the USB 2.0 port of the SL1680 and the other end to the laptop.
Download the WinUSB driver (for Windows):
git clone
After downloading and decompressing the USB Boot software package, right click on the SYNA_WinUSB.inf file in the Synaptics_WinUSB_Driver directory. Select “Install” from the drop-down menu.
Now run the run.bat file which is present in the usbboot directory of the respective device.
After running run.bat, a window will open showing the status of the flash process.
Once the usb_boot environment has been set up and the usb_boot tool is running on the host system, Astra Machina will need to be placed into USB Boot mode. To do that, press and hold the “USB_BOOT” button on the I/O board. Then press and release the “RESET” button. Be sure to hold the “USB_BOOT” button long enough so that the board can reset and detect that the “USB_BOOT” button is pressed. You will get logs like this in the command prompt.
After booting into USB Boot mode, the U-Boot prompt “=>” will be displayed in the serial console.
Now flash the image by copying the image file directory (SYNAIMG) into the following directory: ‘usb-toolusbboot_sl1680_uboot_rdkimages’. Run the below command to flash the image.
You will get logs like this in the serial console.
Reset the device by pressing the reset button on the device, or run the command ‘reset’ to boot.
Now the image is flashed successfully using USB.
Phase 1 User Persona
In Phase I we are planning to launch a Model Zoo website which gives different model details / metrics with the respective executed platform information. The user persona who is going to be involved in Phase I is a developer.
Overall Flow
Detailed Steps
The user enters the Model Zoo website and the landing page displays the different domains and the different usecases under each domain. When the user selects a usecase, the user will be directed to the models under that usecase. There will also be a global search page that provides several filters based on the use case, the applicable scenarios and the supported chipsets, and the user can select any model based on the filters.
The model based on the user's search will be displayed as a card with the qualifiers for why the model should be chosen. The qualifiers include the energy consumed by the model in Joules, the model release date, target usecases and the model download size (tflite).
Based on the qualifiers the user can choose any model of their choice, and on the model details page the model details, the download formats and the different versions of the model will be displayed for the user to download. The GitHub code link and the research paper link from which the model was built would be present on the page.
The user would also be given an insight into the model metrics with a summary of the profiling and a deeper dive into the layer-wise model metrics based on the model profiling, so that the user can best decide which model would suit their usecase.
Once the user has downloaded the model, the user can use the model for their usecase or deploy it on an edge device such as a Raspberry Pi using the CLI tool, which is detailed in the below file. The CLI tool contains the instructions on how to use the tool and how to deploy / run the models on the different supported platforms.
GitHub link for the CLI tool
CLI usage steps: eurmlsdk_README (1).md 3853
Phase I - User Persona Flow
In Phase I we are planning to launch a Model Zoo website which gives different model details / metrics with the respective executed platform information. The user persona who is going to be involved in Phase I is a developer.
Overall Flow
Detailed Steps
The user enters the Model Zoo website and the landing page displays the different domains and the different usecases under each domain. When the user selects a usecase, the user will be directed to the models under that usecase. There will also be a global search page that provides several filters based on the use case, the applicable scenarios and the supported chipsets, and the user can select any model based on the filters.
The model based on the user's search will be displayed as a card with the qualifiers for why the model should be chosen. The qualifiers include the energy consumed by the model in Joules, the model release date, target usecases and the model download size (tflite).
Based on the qualifiers the user can choose any model of their choice, and on the model details page the model details, the download formats and the different versions of the model will be displayed for the user to download. The GitHub code link and the research paper link from which the model was built would be present on the page.
The user would also be given an insight into the model metrics with a summary of the profiling and a deeper dive into the layer-wise model metrics based on the model profiling, so that the user can best decide which model would suit their usecase.
Once the user has downloaded the model, the user can use the model for their usecase or deploy it on an edge device such as a Raspberry Pi using the CLI tool, which is detailed in the below file. The CLI tool contains the instructions on how to use the tool and how to deploy / run the models on the different supported platforms.
GitHub link for the CLI tool
CLI usage steps: eurmlsdk_README.md 4141
Build an image
Steps:
(Reference link: https://synaptics-astra.github.io/doc/v/0.9.0/yocto.html#how-to-build-an-image)
To start the container, use the following command line:
To clone the repository within the build environment, use the following command:
To build an image, execute the following commands:
The resulting image can be found in
Phase I - User Persona
Overall Flow
Detailed Steps
The user logs in to the Model Zoo website and the landing page displays the different domains and the different usecases under each domain. When the user selects a usecase, the user will be directed to the models under that usecase. There will also be a global search page that provides several filters based on the use case, the applicable scenarios and the supported chipsets, and the user can select any model based on the filters.
The model based on the user's search will be displayed as a card with the qualifiers for why the model should be chosen. The qualifiers include the energy consumed by the model in Joules, the model release date, target usecases and the model download size (tflite).
Based on the qualifiers the user can choose any model of their choice, and on the model details page the model details, the download formats and the different versions of the model will be displayed for the user to download. The GitHub code link and the research paper link from which the model was built would be present on the page.
The user would also be given an insight into the model metrics with a summary of the profiling and a deeper dive into the layer-wise model metrics based on the model profiling, so that the user can best decide which model would suit their usecase.
Once the user has downloaded the model, the user can use the model for their usecase or deploy it on an edge device such as a Raspberry Pi using the CLI tool, which is detailed in the below file. The CLI tool contains the instructions on how to use the tool and how to deploy / run the models on the different supported platforms.
GitHub link for the CLI tool
CLI usage steps: eurmlsdk_README.md 4141
Glossary of Metrics
Layer: A layer in a deep learning model serves as a fundamental building block in the model's architecture. The structure of the network is responsible for processing and transforming input data.
Gradient: A gradient is simply a vector which gives the direction of maximum rate of change. By taking steps in that direction, we hope to reach our optimal solution.
GFlops: GFLOPS stands for "Giga Floating-Point Operations per Second" and is a metric used to measure the computational performance of a computer system or a specific operation. It represents the number of floating-point operations performed per second, expressed in billions (giga) per second.
Precision: (also called positive predictive value) is the fraction of relevant instances among the retrieved instances. Written as a formula: Precision = Relevant retrieved instances / All retrieved instances
Recall: (also known as sensitivity) is the fraction of relevant instances that were retrieved. Written as a formula: Recall = Relevant retrieved instances / All relevant instances
IoU: (also known as Jaccard Index) is a popular metric to measure localization accuracy and compute localization errors in object detection models. It calculates the amount of overlap between two bounding boxes: a predicted bounding box and a ground truth bounding box.
Receiver Operating Characteristics (ROC) Curve: ROC stands for Receiver Operating Characteristics, and the ROC curve is the graphical representation of the effectiveness of the binary classification model. It plots the true positive rate (TPR) vs the false positive rate (FPR) at different classification thresholds.
Area Under Curve (AUC): represents the area under the ROC curve. It measures the overall performance of the binary classification model. As both TPR and FPR range between 0 and 1, the area will always lie between 0 and 1, and a greater value of AUC denotes better model performance. Our main goal is to maximize this area in order to have the highest TPR and lowest FPR at the given threshold. The AUC measures the probability that the model will assign a randomly chosen positive instance a higher predicted probability compared to a randomly chosen negative instance. It represents the probability with which our model can distinguish between the two classes present in our target.
Learning Rates: lr/pg0 typically refers to the learning rate for the backbone weights. lr/pg1 is usually associated with the learning rate for the YOLO layers' weights. lr/pg2 often represents the learning rate for any additional parameters, such as biases.
Interquartile Range: "IQR" stands for "Interquartile Range." The Interquartile Range is a measure of statistical dispersion, specifically used to measure the spread of data points within a dataset.
Dice Score: also called the Dice coefficient or Dice similarity coefficient, is a metric used to assess the overlap between a predicted segmentation and the ground truth (actual segmentation). As a formula: Dice = 2 × |A ∩ B| / (|A| + |B|), where A and B are the predicted and ground-truth masks. A higher dice score indicates better performance by your segmentation model.
Ops: The total number of operations to execute one model inference.
MACs: The total number of multiply-accumulate operations to execute one model inference.
CPU Cycles: The total number of CPU cycles to execute one model inference.
Energy: The total energy required to execute one model inference.
Time: Total estimated inference time.
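To ground a few of these definitions, here is a small illustrative Python snippet computing precision, recall, IoU for two axis-aligned boxes, and the Dice score for binary masks. The box format (x1, y1, x2, y2) and the example values are assumptions chosen only for the demo.

```python
# Tiny worked examples for precision, recall, box IoU and Dice score (illustrative values).

def precision_recall(tp, fp, fn):
    return tp / (tp + fp), tp / (tp + fn)

def box_iou(a, b):
    """IoU of two (x1, y1, x2, y2) axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def dice(pred, truth):
    """Dice = 2*|A intersect B| / (|A| + |B|) for sets of foreground pixel indices."""
    pred, truth = set(pred), set(truth)
    return 2 * len(pred & truth) / (len(pred) + len(truth))

if __name__ == "__main__":
    p, r = precision_recall(tp=80, fp=20, fn=10)
    print(f"precision={p:.2f} recall={r:.2f}")                    # 0.80, 0.89
    print(f"IoU={box_iou((0, 0, 10, 10), (5, 5, 15, 15)):.2f}")   # 25 / 175 = 0.14
    print(f"Dice={dice([1, 2, 3, 4], [3, 4, 5, 6]):.2f}")         # 4 / 8 = 0.50
```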
Status - Week ending on 17/05
*Summary:
Internal-Engineering - Gitlab/Jira/AWS/Wiki:
Our EMS server is now hosted at its new domain: ems.int.embedur.com.
Internal Environments - QA/Dev:
A tiny flash utility tool was created to flash Arduino-based tiny devices using AVRDude and tested for flashing certain preliminary AI models.
Plan to run Superset and PySpark in 2 different Spark clusters.
Singtel Production:
Debug & RCA: Kafka brokers from the Singtel end could establish connection with the VMware Production setup for message consumption.
Airtel Production:
Devised a python script to find out the number of devices which were active for less than 1 day, 1 week, 10 days, 1 month and a year based on different last heard time slots.
The offline devices' last heard value has been fetched and inserted in Redis for keeping track of active devices in embedUR.
In the rule engine extractor, we have started logging the reboot count of devices as per the Airtel ask. Added this count in the summary report and also to the files uploaded to Airtel. Also projected the memory & CPU based reboots that happened due to low traffic.
Compared the reboot count of devices given by Airtel with the counts captured on embedUR. For some of the devices the reboot count is high - so we need to validate it by comparison.
Airtel asked for a weekly report which should contain the list of devices for keys - CPU, memory etc. - which are repeated at least 3 times over a period of one week. Scheduled a CRON job for the script to trigger every Monday.
Taken around 10k Zyxel devices whose interference values are above 60 and repeated continuously for the last 7 days. Added the average interference value for the week, the number of occurrences and the neighbor data.
A PySpark program is written to retrieve ORC files based on user-defined criteria like hour, table name, and desired columns. It then transforms these files into CSV format for further analysis.
This PySpark application is designed to automate the creation of self-healing summary reports which contain devices repeated with the cases for more than 3 days. These reports provide insights into critical system metrics such as CPU temperature, memory usage, coverage hole, interference and CPU usage, aiding in self-healing capabilities.

Tickets information:

| Closed Tickets | Description |
| :--- | :--- |
| DEVOPS-1150 | EMS Domain Change |
| AI-344 | Flashing Firmware/Utility (Arduino Nano 33 BLE) |
| AIR-3687 | Python script to find out number of devices that were active for particular time period |
| AIR-3691 | Insert the offline devices data into Redis |
| AIR-3694 | Add the reboot count of devices & low traffic-based reboots counts |
| AIR-3695 | Find the reboot count of given devices on 13/05/24 |
| AIR-3696 | Self-healing weekly report |
| AIR-3697 | Fetch around 10k devices for Zyxel trail with neighbor data |
| AIR-3689 | ORC processing using Pyspark |

| In-Progress Tickets | Description | ETA |
| :--- | :--- | :--- |
| AI-275 | Wrapper for deployment in device | 24/05/2024 |
| SMN-718 | Data Migration Setup - Production environment | 24/05/2024 |
| SMN-739 | Singtel Production Service Deployment | 24/05/2024 |
| AIR-3698 | Airtel - Self Healing reports using Pyspark Job | 21/05/2024 |
| AIR-3692 | Plan to run Thriftserver and PySpark together | 24/05/2024 |

PTO & Vacation: Nil

Plan for next week:
Flashing Firmware
The focus is on developing a C++ firmware flashing library for a tiny device, alongside investigating alternative methods for flashing devices. Testing and documentation findings will be pivotal to ensure the reliability of the solutions crafted.
Data Migration
Do Cassandra data migration from the Huawei CRS nodes to the VMware CRS setup once the connection has been established for all 20 CRS nodes.
Self-healing
Create a weekly summary report which should contain the count of devices repeated at least 3 times over the period of one week.
Reporting Infrastructure
Deploy 2 different Spark clusters - for Superset and PySpark application execution - in the Airtel Production environment.

Detailed Summary:
Tiny Flash
We have developed a firmware solution to facilitate flashing of Arduino IDE BLE, leveraging Opti JTAG, ISP, AVRDude, and a CLI wrapper, implemented in C++. Rigorous testing, including validation on the Nano 33 BLE, underscores the reliability of our implementation. The comprehensive update of documentation on the wiki enhances accessibility and understanding for fellow developers. The deployment involves quantizing the AI model with TinyML, converting it to C headers, then to hex via objcopy, and finally compiling and flashing it onto the device using JTAG.
EMS Domain Change
Our EMS server is now hosted at its new domain: ems.int.embedur.com. As part of this migration, we no longer have access to the DNS service (AWS Route 53) and hosted zone (embedur.com). Instead, the IT team has provided us with a new hosted zone: int.embedur.com. All internal servers have been transitioned to this new domain name. The reason behind this migration is to enable certificate updates and automation, which are crucial for our internal servers. With the DNS challenge, automated certificate renewal becomes possible, hence ensuring the security and smooth operation of our EMS server. Additionally, new records have been added in register.com for redirection of the old domain names.
Spark - Restructure
Thriftserver utilizes the entire resources bound to the Spark cluster, not allowing other PySpark based scripts to run. We have now restructured the deployment model of Spark a bit - 2 different clusters for Superset and PySpark for efficient execution.
Singtel VMware connection establishment
Added SNAT and DNAT rules in the VMware Production console, whitelisted Kafka brokers and added relevant routes in all the KFE servers for Singtel to establish connection with 12 Kafka brokers to consume messages.
Airtel active devices less than particular time period
Retrieved the list of devices in the Airtel Production environment active for less than 24 hours, a week, 10 days, 1 week, 1 month, 6 months and a year, for different sets of last heard time periods.
Redis - Data insertion
Filtered the devices which are offline for more than 7 days in the Cassandra dump and used the disconnect time column as last heard and inserted it in Redis. Compared the Redis dump and the Cassandra dump to find out the difference in numbers, which was around 300k. Out of these 300k devices, 200k devices had values in the disconnect time column - so we used it as last heard and inserted it in Redis. For the remaining 100k devices, we used the REST APIs to fetch the last heard timestamp.
Self-healing summary
Taken the reboot count of devices (on the monitoring table) and written the device details to CSV as well. Considered devices which rebooted 10 times a day or more and recorded the maximum value. Taken the devices which rebooted due to low traffic conditions and added them to the Rebooted devices table.
Reboot count comparison
Airtel has given a set of devices - there are around 50k devices in this list. Filtered the devices which are present only in embedUR, and the final count is around ~20k devices.
For these devices, we captured the reboot count from the ORC files - table: "journal", type: "Accesspoint" & field_name: "erm_reboot_reasons_day_all". Finally attached these counts in a file which contains both the TR and embedUR reboot counts.

Self-healing weekly report
Written a Python script to fetch the repeated counts and the average value of the keys (CPU, memory etc.) for the week. On 13-05-24 we sent the weekly report, and a scheduler has been placed to trigger it automatically every Monday.

Zyxel trial group
Using a Python script, filtered the 10k devices (radio_24) which are repeated consecutively for the last 7 days. Using the REST APIs, the neighbor data has been captured and the necessary columns added to the output file.

ORC processing - PySpark Job
The PySpark job reads data from ORC (Optimized Row Columnar) files, a data format optimized for efficient storage and retrieval of large datasets. The job focuses on extracting the relevant data by applying filters based on:
Hour: It selects ORC files for a particular hour range (e.g., only files related to hour 475937).
Table: It targets specific tables within the ORC files (e.g., a table named "cloud-devel.journal").
Columns: It extracts only the desired columns from the selected tables, reducing processing time and output size by focusing on the information needed.
Once the data is filtered based on hour, table and columns, the job converts it from the ORC format to CSV (Comma-Separated Values) format. CSV is a more human-readable and widely used format, making it easier to import into various data analysis tools and spreadsheets.
Efficiency: PySpark's distributed processing capabilities enable efficient handling of large ORC files.
Accessibility: CSV format provides broader compatibility with various data analysis tools.
Automation: The job streamlines the process, eliminating manual data extraction and conversion tasks, reducing errors and improving consistency.
Depending on the complexity of the data and processing requirements, the PySpark job might involve additional steps like data cleaning, aggregation or transformation before converting to CSV. A minimal PySpark sketch of this flow is included below, after the Self Healing Reports section.

Self Healing Reports - PySpark
The application analyzes the data to identify devices experiencing the user cases (CPU temperature, CPU usage, memory usage etc.) repeatedly over a specific timeframe (e.g., more than 3 days). The application utilizes the extracted data to generate summary reports. These reports include:
Device ID
Repeated issue (e.g., CPU temperature exceeding threshold)
Duration of the issue (e.g., "More than 3 days")
Additional relevant information (may vary)
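To make the ORC-to-CSV extraction described above concrete, here is a minimal PySpark sketch. The directory layout, hour value and column names are illustrative assumptions, not the production configuration.

```python
# Minimal PySpark sketch of the ORC -> CSV extraction described above.
# Paths, hour value, table and column names are illustrative assumptions.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("orc-to-csv").getOrCreate()

hour = "475937"                                    # hour bucket to process (assumed layout)
table = "cloud-devel.journal"                      # table directory inside the ORC dump
columns = ["sno", "type", "field_name", "value"]   # hypothetical column names

# Read only the ORC files for the requested table and hour.
df = spark.read.orc(f"/data/orc/{table}/hour={hour}/")

# Keep just the columns of interest to cut processing time and output size.
subset = df.select(*columns)

# Write the filtered data out as CSV for downstream analysis.
subset.coalesce(1).write.mode("overwrite").option("header", True) \
      .csv(f"/data/csv/{table}_{hour}")

spark.stop()
```

The same skeleton extends naturally with extra `filter()` or aggregation steps when the report needs cleaning or roll-ups before the CSV write.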
Astral Co-pilot
Overview
Astral Co-pilot is a recommendation engine and an assistant which provides the following functionalities.
1. *Support Recommendation System
   1. Gives recommendations to the L1/L2 support team for taking certain actions on problematic devices when the customer raises a ticket.
2. Astral Assistant
   1. Proactive maintenance / alert system
      1. It monitors the device data / metrics and takes proactive actions on the device through the existing Self-healing services, or alerts the support team beforehand.
   2. Customer Recommendation System
      1. Gives recommendations to the customers based on the device data to improve their user experience.

Support Recommendation System
The support recommendation system monitors the API requests from the L1 and L2 servers, then analyzes the data / metrics of those devices by monitoring them and provides recommendations for solving the issue to the L1 or L2 servers. For identifying the issue, the system looks at the following details.
1. Runtime Data
2. Events

Architecture Diagram

Flow
Currently the Airtel L1 and L2 teams use our REST API for gathering information related to devices, such as coverage hole, interference, connected devices etc. Normally these APIs will be triggered from the L1 or L2 server when they are looking into a customer-raised ticket / issue. L1 can trigger them when they are on a call with the customer, and L2 can trigger them when they check the tickets forwarded from the L1 team.
As mentioned in the above diagram, the L1 / L2 server will trigger the API call and it will get the data from the existing REST service. From APISIX it will be redirected to a REST replica based on internal load-balancing strategies. Today, we have an HTTP logger service which listens to the API requests and forwards the requests to the monitoring service through Apache Kafka. The monitoring service (known as the ==runtime update service== in Airtel production) will monitor the CPE by gathering runtime data from the device for a certain interval, sending update requests through the REST service. Currently we have this setup to provide the latest data to the support team for their analysis.
We are going to use the same setup for our recommendation engine, but here ==changes will be made in the monitoring service to push the device details to another Kafka topic== after monitoring the device for a certain interval.

Recommendation Services
The recommendation service subscribes to the Kafka topic to which the monitoring service publishes after finishing the monitoring on the respective device (5 or 10 iterations, based on the configuration). Then it will get the device's monitored runtime data from the journal table, and it will also get the Critical and Warning events specific to that device from the events table. The runtime data will then be parsed and sent to the Recommendation AI service API for inference. The AI service will provide the classification class based on the device data. Based on the classification, the recommendation service will get the probable causes for that problem and generate the recommendation response (a sentence, like ChatGPT) for the Airtel support team.
Since this is an asynchronous process, we need to send the response / recommendation asynchronously to the L1 / L2 system after our prediction. Below are the three different ways, as mentioned in the architecture diagram.

*==Recommendation REST API==
Astral can provide a recommendation API through the existing REST interface, and the L1 or L2 team can call the recommendation API 2-3 minutes after the initial trigger.
The API will provide the recommendations / actions for the device.

==Kafka streaming==
The recommendation service can publish the recommendations / actions to a specific Kafka topic, and a service on the Support team's end can subscribe to that topic and get the recommended actions. Astral can open an external listener on the existing Kafka (for that particular topic) for the support service to connect to.

*==Webhook==
A webhook will be created and the recommended actions will be triggered through the webhook. This is the last option we would consider implementing for this feature.

:::info
Kafka streaming is the better and easier option to go with, but it requires engineers with the relevant background on the Airtel end. Since they are already using a data lake and consuming our ORC files, we can propose this option for implementation.
:::

Astral Assistant
There are two different features the Astral virtual assistant will provide to support and end customers.

Proactive Maintenance / Alert System
The proactive maintenance system will monitor the runtime data from the device and take the necessary actions through the Self-Healing functionality. For some of the actions identified from AI prediction based on runtime data, we can trigger the action directly on the device. Alternatively, the system will alert the support team so they can fix the issue before the customer becomes aware of it or raises a ticket.

:::info
We will come up with the list of actions which we can trigger through self-service, based on the impact to the customer and agreement with Airtel.
:::

Customer Recommendation System
The customer recommendation system will provide valuable recommendations to the end users of Airtel for improving their user experience. For example, if Astral is receiving multiple client authentication / authorization events from a customer, then our recommendation system will trigger a notification informing the user that multiple failed authentication attempts have been made, along with the steps to reset the password through the Thanks App.

Need for a new system
Currently Airtel is using the hourly generated ORC files for analyzing the working of its devices, and using the REST API for identifying any anomalies on the devices after the customer has raised a complaint. From the hourly ORC files, they analyze and take actions such as changing the channel through the API if the interference value is high. There are reasons why we need a better system for identifying the issues compared to the current one.
The hourly ORC processing is a batch process and currently it is ==executed every 2-3 hours by the Airtel team==.
The L2 team is currently using the REST API and analyzing the device metrics ==after the customer has raised a complaint and it has been transferred from the L1 to the L2 team==.
We need a system which can proactively take actions and provide recommendations in real time.

Architecture Diagram
:::info
Both the Support recommendation and the assistant can be showcased in a single diagram, but they are presented as two different diagrams for better understanding. Everything can be implemented at the same time.
:::

Flow
In the existing setup, the device runtime and event data will be streamed to ==Kafka's optRuntime and optEvent topics==. The assistant service will subscribe to those topics and receive all the streamed data in real time. The runtime data will be parsed for the required parameters and passed to the recommendation AI engine to analyze whether there is any issue and provide a recommendation class as output based on the issue.
The recommendation engine will provide an NA class in case there are no issues in the device. The assistant service should be connected to Cassandra and the Redis cache for saving the required data and caching specific occurrences of events / metrics of a device for quicker retrieval in real time. We will also use the same monitoring service for triggering any live monitoring on the device if required. After receiving the recommendations from the AI engine, the assistant service will check and notify the L1 or L2 support team through different approaches such as ==Kafka streaming, Webhook or the REST API interface==, as discussed in the previous section. (A minimal sketch of this consume-classify-notify loop is given at the end of this page.)

Milestones

==Phase - I== (Milestone 1): ==AI model with probable causes== - ==Provide daily call-in analysis report to Airtel== - ETA: ==29th May== - Status: ==Planned==
Call-in analysis service design - Design and finalize both service and model - 23rd Apr
Recommendation model - Development of Recommendation model - 10th May
Recommendation Service - Development of Recommendation Service - 17th May
QA testing completion - Testing recommendation service and models - 24th May
Production Deployment - Deployment in Production environment - 29th May

==Phase - II== (Milestone 2): ==Airtel Co-pilot Recommendations & Notifications Service== - ==Planning & Finalizing the changes== - ETA: ==TBD== - Status: ==Planned==
Recommendation AI Model - Enhancement - Provides a recommendation in words (like ChatGPT)
Recommendation / Assistant Service plan - Detailed technical plan for the recommendation microservice
REST API changes - Detailed technical plan for REST API changes

Team to Work: Vimal, Bharath, Jyotsna, Shan (50%)
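As a concrete illustration of the consume-classify-notify loop described in the Flow sections above, here is a minimal Python sketch. The topic names optRuntime/optEvent come from this page; the broker address, AI-engine URL, field names and the output topic are illustrative assumptions, not the production configuration.

```python
# Minimal sketch of the assistant service loop: consume runtime/event data,
# ask the recommendation AI engine for a class, and publish a notification.
# Broker address, AI-engine URL, field names and the output topic are assumed.
import json
import requests
from kafka import KafkaConsumer, KafkaProducer

consumer = KafkaConsumer(
    "optRuntime", "optEvent",
    bootstrap_servers="kafka:9092",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)
producer = KafkaProducer(
    bootstrap_servers="kafka:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

AI_ENGINE_URL = "http://recommendation-ai:8080/infer"   # assumed endpoint

for message in consumer:
    record = message.value
    # Parse only the parameters the AI engine needs (assumed field name).
    features = record.get("runtime", {})
    prediction = requests.post(AI_ENGINE_URL, json=features, timeout=5).json()

    # "NA" means the engine found no issue for this device.
    if prediction.get("class") == "NA":
        continue

    # Notify the support team over a Kafka topic (assumed topic name);
    # a webhook or REST callback could be used here instead.
    producer.send("copilot-recommendations", {
        "device_id": record.get("device_id"),
        "recommendation_class": prediction.get("class"),
        "probable_causes": prediction.get("probable_causes", []),
    })
```

The same skeleton supports the other two hand-off options: replace the final `producer.send` with a webhook POST or persist the result for the recommendation REST API to serve.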
Status - Week ending on 10/05
*Summary:

Internal-Engineering - Gitlab/Jira/AWS/Wiki:
After a thorough investigation, it was found that most builds in the GitLab pipeline were failing due to reported missing or removed directories. Following a GitLab upgrade, these issues have been resolved.
The default configuration of the @superset package appears to be unstable. To ensure a successful build (ecloud_www), manual reconfiguration of certain parts of the code was necessary. As a workaround, the node_modules directory has been made persistent in GitLab Runner.
A new policy and role have been created, attaching the policy to the developer group and granting them access to AWS Cognito services. Our development team can now leverage these services effectively.
A strict commit structure is enforced via server-side hooks on GitLab for the Geo Trace and Recommendation Service repositories.
The GitLab production upgrade from 13.1.11 to 13.8.8 was completed successfully.
A new Jenkins job was implemented to streamline the build process for the DASAN project's customer release image.

Internal Environments - QA/Dev:
A CatBoost ML model trained on access point runtime logs predicts call initiation behavior, providing interpretability through SHAP scores and Flask integration for seamless data handling and result interpretation with local dictionary correlation.

Singtel Production:
Communication has been established between the CRS nodes of the Huawei and VMware environments, except for the CRS-5 and CRS-8 nodes.
SCRAM users, consumer groups and ACLs have been created for the runtime streaming and push notification service topics in the production environment.
Messages were consumed successfully in the Staging environment after increasing the partition count to "3" for all the topics.
Increased the number of slots (configuration change) in the "flink-erm" service to fix the slot allocation issue within a particular time period.

Airtel Production:
The configuration changes addressed the issue of Superset queries running solely on the Spark master. This optimization distributes the workload across the available workers, enhancing overall performance.
Implemented Role-Based Access Control (RBAC) in Superset to restrict customer views to specific roles.
Currently we are uploading the repeated-cases files based on models, but Airtel wants the output files to be based on keys such as memory usage, CPU usage etc. Also split the interference repeated-cases count based on both bands - radio_5 & radio_24.
To find out the devices which got repeated for at least 10 days in the self-healing reports from 24th March to 24th April (1 month) for CPU, memory, CPU temperature, coverage hole (at least 15 days) & interference (only for band 2.4 GHz), along with the value (memory usage, CPU usage etc. of the particular devices) and the timestamp.
To find the devices online in Astral in the last 24 hours and the number of clients connected to them, and also to find the devices which have been online for the last 30 days in the Astral system.

Tickets information:

Closed Tickets - Description
DEVOPS-1143 - Support for certain explicit modules for ecloud_www
DEVOPS-1144 - Investigate & Fix GitLab Runner build failures
DEVOPS-1145 - Policy & Role for AWS Cognito
AIR-3643 - Create & train model for providing top 5 recommended actions
DEVOPS-1115 - Enforce strict commit structure
AIR-3673 - Superset UI-based queries using only Spark-master for execution
DEVOPS-1146 - Upgrade Gitlab from 13.1.11 to 13.8.8 in Test machine
DEVOPS-1147 - Upgrade Gitlab from 13.1.11 to 13.8.8 in Production
DEVOPS-1148 - To setup Jenkins build for DASAN customer image
AIR-3680 - Create key-wise files for repeated cases
AIR-3681 - Find the devices repeated for at least 10 days along with value & timestamp
AIR-3682 - Find the online devices & clients connected to them

In-Progress Tickets - Description - ETA
AI-275 - Wrapper for deployment in device - 24/05/2024
AIR-3666 - To work RBAC on Superset UI for Customer view - 13/05/2024
SMN-718 - Data Migration Setup - Production environment - 17/05/2024
SMN-739 - Singtel Production Service Deployment - 17/05/2024
SMN-741 - Flink Services Restart - 17/05/2024

PTO & Vacation:

Plan for next week:
Model Zoo: Brainstorm ideas for a Model Park, including its API/Wrapper design, functionality, and the feasibility of the backend and development architecture. Dry run with a compatible device such as the Arduino Nano 33 BLE.
Caddy: Leverage the Caddy proxy which we developed earlier and deploy it on internal servers such as EMS, WIKI and LMS for automatic cert renewal.
Gitlab Upgrade: Upgrade both the GitLab Test machine and the Production machine to version 13.12.15.
Data Migration: Validate the Cassandra Data Migration by taking a snapshot and an incremental backup for one of the tables and ensure the load gets reflected in the VMware setup.
Self-healing Weekly reports: Airtel asked for weekly reports which should contain the devices which got repeated for at least 3 days over a period of one week and the devices which got rebooted more than 10 times a day.
ORC based processing using Spark: Leverage multiple Spark executors to speed up the data extraction/processing of the ORC files.

Detailed Summary:

Recommendation Service
A recommendation ML model has been developed using the CatBoost machine learning algorithm to forecast whether a customer would initiate a call, leveraging their access point's runtime logs. The model was trained on a dataset comprising a set of distinct fields. By analyzing runtime logs, the model aimed to discern patterns indicative of call-initiation behavior. The extensive dataset ensured a comprehensive exploration of potential predictors. The model also extracted the top 5 field indexes along with their SHAP scores, offering insights into the prediction rationale. This approach facilitated interpretability and understanding of the model's reasoning process to a certain extent. The main script served as a single hub, employing Flask to orchestrate model operations. It accepted data in JSON format via a POST method, furnishing parsed and indexed model outputs. Additionally, it correlated the results with the top field indexes retrieved from a local dictionary, enhancing result interpretation and usability. (A minimal serving sketch is included at the end of this report.)

Self-healing Feature - Reporting
We configured Spark to use the designated master node for query execution. Superset now leverages the Spark workers when executing queries through its UI. This distributes the workload across the available workers, improving performance. However, the Spark Thrift Server (used for communication between Superset and Spark) currently utilizes all CPU cores allocated to the workers.
If each worker has 1 core and 3 workers are present, the Thrift Server consumes a total of 3 cores. This can potentially limit the available processing power for actual query execution on the workers themselves.
Enhanced Superset security by implementing RBAC for customer views. This ensures only authorized roles can access sensitive data.

Gitlab Upgrade
Validation was conducted following the backup and restoration process on the test machine. The production machine underwent an upgrade to version 13.8.8, and both pre- and post-validation procedures were carried out, subsequently updating the Wiki.

Jenkins - DASAN
The new Jenkins job offers a flexible approach to building DASAN customer release images, ensuring all project-specific requirements and archives are incorporated.

Singtel Production Migration:
Except for the CRS-5 and CRS-8 nodes in the Huawei cloud, we could establish communication with the VMware setup for the Cassandra data migration to happen.
Increased the partition count to "3" for all topics in the Staging environment, and third-party consumers could successfully connect to the Kafka brokers and consume messages.
Prepared a Cassandra Data Migration plan (in a day-wise manner) as per the customer's ask.
Created SCRAM users, consumer groups and ACLs in the production environment for the customer to consume messages.
The HAproxy services, flink-erm and flink-erm-tm have been deployed successfully in the respective VMware instances.

Self-healing summary Report:
Made the necessary changes to the shell script so that the output files will be based on keys instead of models. Also, the interference value count on the repeated-cases table has been altered to show the counts for both bands.

Repeated cases:
Written a Python script to take the key-wise (memory, CPU etc.) CSV files separately as input files and remove the duplicates which got repeated on the same day. Then compiled all the 30-day files and took the devices where the number of occurrences is greater than or equal to 10 days or 15 days, according to the data condition given by Airtel.

Online devices & Client connection:
Both sets of data have been extracted from the ORC files. For getting the client connections, the field "erm_accesspoint_stats_clients" on table journal and type Accesspoint is used.
Removed the duplicates & dropped null values for both asks, and the files were shared with Airtel.
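To make the Recommendation Service described above more concrete, below is a minimal sketch of the Flask hub serving the CatBoost model with SHAP-based top-5 fields. The model file name, field dictionary and JSON keys are illustrative assumptions, not the production code.

```python
# Minimal sketch of the Flask hub that serves the CatBoost call-in model
# and returns the top-5 fields by SHAP score. Model path, feature order,
# field dictionary and JSON keys are illustrative assumptions.
import numpy as np
from catboost import CatBoostClassifier, Pool
from flask import Flask, jsonify, request

app = Flask(__name__)

model = CatBoostClassifier()
model.load_model("call_in_model.cbm")          # assumed model file

# Hypothetical mapping from field index to a human-readable field name.
FIELD_DICTIONARY = {0: "interference", 1: "cpu_usage", 2: "memory_usage"}

@app.route("/predict", methods=["POST"])
def predict():
    payload = request.get_json()
    features = np.array(payload["features"], dtype=float).reshape(1, -1)

    # Probability that this access point's owner will call in.
    prob = float(model.predict_proba(features)[0][1])

    # SHAP values per feature; the last column is the expected value, drop it.
    shap = model.get_feature_importance(data=Pool(features),
                                        type="ShapValues")[0][:-1]
    top5 = np.argsort(np.abs(shap))[::-1][:5]

    return jsonify({
        "call_in_probability": prob,
        "top_fields": [
            {"index": int(i),
             "name": FIELD_DICTIONARY.get(int(i), f"field_{int(i)}"),
             "shap": float(shap[int(i)])}
            for i in top5
        ],
    })

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```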
Installation
WSL INSTALL
Run Windows PowerShell as administrator:
wsl --install
wsl --update
wsl --install -d ubuntu
Set the username and password.
Open Terminal -> mkdir Workspace -> cd Workspace
If an error occurs in any of the above steps:
1. Go to Settings -> Apps -> uninstall Ubuntu and Windows Subsystem for Linux. Search for Optional Features -> More Windows features -> DISABLE => Hyper-V, Virtual Machine Platform, WSL.
2. Again enable all the above.
3. ENABLE => Hyper-V, Virtual Machine Platform, WSL. Then, RESTART.

SSH KEY GENERATION
ssh-keygen
cd .ssh
cat the .pub file -> copy the content and paste it in GitLab -> Settings -> add the key.
Clone the respective project directories.

Install Docker
$ curl -fsSL https://get.docker.com/ -o get-docker.sh
$ sudo sh get-docker.sh

Set sudo privilege
$ sudo usermod -aG docker $USER
Restart the terminal.

Add registry to insecure registry list
$ sudo vi /etc/docker/daemon.json
Add the following content:
{ "insecure-registries": ["http://34.194.193.152:5000"] }

Restart docker service
$ sudo service docker restart

Docker login
$ docker login http://34.194.193.152:5000
Username: dockeruser
Pwd: dockeruser@123
Tiny ML Summit
The TinyML Summit was a 4-day event:
Day 1 was the strategic partner meeting, where we discussed the future of tinyML, how to grow the organization and how to get more value from it.
Day 2 was a research symposium, with the full day devoted to research paper presentations pushing tinyML forward. This is where we would have presented our paper if it had been selected. Interesting topics:
1. tinyML benchmarking with MLPerf Tiny, especially related to TOPS / power. We found that some chip architectures, like GreenWaves' RISC-V based MAP5 architecture, are leaders in this space.
2. A new PyTorch runtime called ExecuTorch, promoted by Meta/ARM, is being pushed. It is intended to make tinyML PyTorch models work better. ARM is pushing to move their model zoo source to ExecuTorch.
3. There is a focus on harnessing all of the processing power on IoT devices to push AI goals, as IoT CPUs combined have more compute power than all Nvidia graphics cards produced today.
4. Meta had a good presentation on contextual AI. Essentially, today every IoT/wearable device is its own intelligence island, and they don't work together except where companies like Apple create the products to work as one. This presentation talked about IoT/wearable devices being aware of each other and working with AI cooperatively to make the user experience more seamless.
Days 3 and 4 were mixtures of presentations by companies such as ARM, STMicro, Synaptics, etc. on their view of tinyML technologies and new advancements, as well as meet-and-greets across all companies and tinyML visitors. Here we got to showcase our RPI5 demo running YOLOv8 at 1 FPS on a fully loaded CPU versus the Synaptics SR110 running YOLACT at 10 FPS as a comparison. There was a lot of interest from chip companies such as NXP/Infineon/Silabs/etc. in the solution, and we also talked to Amazon Blink, Sonos, Sony Camera Vision, Seeed, and others. ARM also introduced us and our solution at their presentation on the state of AI for the ARM ecosystem. From this, they are interested in us helping on their model zoo as well as demos of AI on their chips/simulation platform.
Design & Architecture
References
Cloud Architecture v1.1:
Claro Demo:
1. For 3 Million Devices, with a data sending period of 2 hours, we ingest ~1 Mil messages every minute.
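A quick back-of-the-envelope check of the figure above, assuming the ~1 Mil/minute count refers to ingested messages rather than raw device reports; the per-report fan-out factor below is an assumption used only to reconcile the figures, it is not stated on this page.

```python
# Back-of-the-envelope check of the ingest rate quoted above.
# The fan-out factor is an assumption used only to reconcile the figures.
devices = 3_000_000
period_minutes = 2 * 60                      # each device reports every 2 hours

reports_per_minute = devices / period_minutes
print(reports_per_minute)                    # 25,000 device reports per minute

# Reaching ~1M ingested messages per minute would then imply each report
# expanding into roughly 40 internal messages (e.g. per-radio/per-client records).
fan_out = 1_000_000 / reports_per_minute
print(fan_out)                               # ~40 messages per device report
```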
MLPerf Tiny
Understanding MLPerf benchmark framework
Detailed information is provided in the following PDF document: MLPerf_Tiny_Benchmark_2106.07597v4.pdf 908757
A quick summary is to integrate the following benchmark models and calculate the metrics mentioned below.

Metrics:
1. Performance mode
   1. Latency (time taken per inference) or throughput (number of inferences per second)
   2. Top-1 (Accuracy)
   3. AUC
      1. The AUC metric calculates the area under the ROC curve. A larger area under the curve signifies a better overall ability to separate the positive and negative classes.
      2. The AUC metric is a powerful tool for assessing the performance of binary classification models. It offers a more informative measure of a model's ability to distinguish between positive and negative cases compared to accuracy alone.
2. Energy mode
   1. Energy consumed per inference

Performance Mode vs Energy Mode:
- Performance Mode: connects to the Host PC. Energy Mode: electrically isolated from the Host PC.
- Performance Mode: talks directly to the Runner. Energy Mode: talks directly to the IO Manager.
- Performance Mode: baud rate can be changed. Energy Mode: baud rate fixed at 9600.
- Performance Mode: timestamp is an MCU counter of at least 1 kHz (1000 us). Energy Mode: timestamp is a GPIO falling edge with a 1 us hold time.
- Performance Mode: measures throughput (inferences per second) and accuracy (Top-1 and AUC). Energy Mode: measures throughput (inferences per second) and energy (Joules per inference).

Because of these key differences, two different plug-ins are provided in the "Benchmarks and Test Scripts" drop-down, one for each of the two modes. It is not possible to switch modes dynamically because some UARTs cannot change baud on the fly. Future versions of the benchmark will support alternate solutions for changing modes, but for now it requires a recompilation of the firmware and use of the define EE_CFG_ENERGY_MODE 1 to switch. More details on the Performance and Energy modes can be found at

Reference platforms used
1. DUT: STM32 Nucleo L4R5ZI-P
2. Energy monitor: STM32 Nucleo LPM01A
3. IO Manager for UART: Arduino UNO
Refer: for details
There are also two videos to accompany the energy measurement process. Part one explains how to connect the hardware described in this document. Part two explains how to use the runner to make an energy measurement.
Note: The reference working platform available on the web is the STM32 Nucleo L4R5ZI; however, due to unavailability of this device, we procured the Nucleo L4R5ZI-P and got it working. The SDK for these two platforms is different, as the pin layout and peripherals differ.

Datasets and Ground truth file
The datasets and the ground truth file can be downloaded from the Synaptics AIML SharePoint.

Energy Runner Framework
The runner binary is sponsored by MLCommons and is available to anyone after a click-through agreement. It can be found on the EEMBC website. Current version: 3.0.10

MLPerf Tiny Integration in SR110
The recommendation is to get MLPerf Tiny working on the reference platform mentioned above to get a full end-to-end understanding before following the steps below. There is sufficient documentation to refer to - https://wiki.int.embedur.com/doc/nucleo-l4r5zi-p-dMHndgCXg3
SR110 has two UART interfaces, UART0 for the CMD/API interface and UART1 for logging. To communicate with the Energy Runner Framework, a SHIM layer needs to be implemented in the DUT FW. The communication is through the UART interface, and the SR110 CMD/API interface should be configured via UART0.

SR110 SHIM Layer implementation
1. The SHIM layer primarily has two CPP files, internally_implemented.cpp and submitter_implemented.cpp.
2. The submitter_implemented.cpp file has common library function implementations for initialization, UART0 TX (printf), timestamp printing, etc.
3. The submitter_implemented.cpp file has functions to parse the UART RX (with % as the delimiter) and backend function implementations to process each command. It has the APIs to load tensors and run inference using the TFLite library.
4. Implementation in SR110:
   1. http://gitlab.embedur.local/product/synaptics_sr110/-/tree/mlperf_tiny/common/applications/sample_applications/mlperf_tiny/mlperf_common_libs_src
   2.

Benchmark model integration
Once the DUT/SR110 is able to communicate with the Runner FW, integrate the following models in SR110:
1. Image classification
2. Keyword spotting
3. Visual wake words
4. Anomaly detection

The TFLite files can be found at

Compile the TFLite model using the Vela compiler (refer to the steps at ) and integrate the generated model .cc file, hooking it up with the submitter_implemented.cpp file for interfacing with the TFLite libraries for the following operations:
1. Create a TFLM instance
2. Load the model into memory
3. Preprocess the input data (quantization) and load it into the input tensor
4. Run inference
5. Post-process the output tensor (de-quantization) and print the predictions

SR110 implementation for the above 4 benchmark models can be found at

Running Performance Mode
High-level flow / sequence of steps:
1. Bring up the DUT setup.
2. Start the Runner framework, then detect the DUT COM port.
3. Click the Initialize button to detect the model under test.
4. There are 3 test modes:
   1. Setup: Provide the number of warm-up iterations, inference iterations and the input file. Warm-up iterations are required to do any initialization (e.g. loading the model into memory) before the actual inference is measured. The input file should be from the dataset directory.
   2. Median Performance: This option performs inference on 5 different input files (using the number of inference iterations set in the options above) and reports the median "inferences per second" measurement.
   3. Accuracy: This option runs a single inference on all of the input files in the datasets folder and generates the Top-1 and AUC values.

Testing Results

MLPerf Tiny benchmark model | Throughput/Latency | Top-1 | AUC (area under the curve)
Image classification | 974 inf./sec | 87.5% | 0.98
Keyword spotting | 2024 inf./sec | 90.1% | 0.99
Visual wake words | 1868 inf./sec | 84.5% | 0.92
Anomaly detection | 1120 inf./sec | 77.8% | 0.85

References
1. https://github.com/eembc/energyrunner/blob/main/README.md
2.
3.
4.
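To make the SHIM-layer flow above more concrete, the sketch below shows the general shape of a '%'-delimited command parser and dispatcher. It is illustrative only: the command strings, reply strings and helper functions are hypothetical placeholders rather than the actual EEMBC or SR110 API, and the UART is simulated with a test string so the example is self-contained.

```cpp
// Minimal sketch of a '%'-delimited command parser, illustrating the SHIM-layer
// flow described above. Everything here is a hypothetical placeholder for the
// real SR110 / EEMBC glue code; the UART is simulated with a test string.
#include <cstring>
#include <cstdio>

static char   g_cmd_buf[128];
static size_t g_cmd_len = 0;

// Placeholder for the backend that would load tensors and invoke the model.
static void run_inference() { std::puts("[stub] running inference"); }

// Placeholder for UART0 TX towards the Energy Runner.
static void uart_write_string(const char* s) { std::fputs(s, stdout); }

// Dispatch one complete command received from the runner.
static void handle_command(const char* cmd) {
  if (std::strcmp(cmd, "name") == 0) {
    uart_write_string("m-dut-name\r\n");   // identify the DUT (illustrative reply)
  } else if (std::strcmp(cmd, "infer") == 0) {
    run_inference();
  } else {
    uart_write_string("e-unknown-command\r\n");
  }
}

// Accumulate RX bytes until the '%' delimiter, then dispatch.
static void on_rx_byte(char c) {
  if (c == '%') {
    g_cmd_buf[g_cmd_len] = '\0';
    handle_command(g_cmd_buf);
    g_cmd_len = 0;
  } else if (g_cmd_len < sizeof(g_cmd_buf) - 1) {
    g_cmd_buf[g_cmd_len++] = c;
  }
}

int main() {
  const char* simulated_rx = "name%infer%";  // stand-in for bytes arriving on UART0
  for (const char* p = simulated_rx; *p; ++p) on_rx_byte(*p);
  return 0;
}
```

In real firmware, on_rx_byte would be driven by the UART0 RX interrupt or polling loop rather than a test string, and the dispatch table would cover the full set of runner commands.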
Weekly Status Report - 16th May
Executive Summary:

Model Zoo:
Worked on designing the UX for the landing page and further enhancements to the UI.
Worked on including support for tflite models in YOLO and PyTorch models in the CLI tool.
Added support for video predictions and live video support for the YOLO model detection in the CLI tool.

AI Explorations:
Driver Distraction Detection:
Included other forms of distraction detection as well, using a neural network based model.
Explored other classes that can be added to distraction detection as well.
Speech Recognition:
Worked on the WAV2Vec model to get the transcript from an audio file.
Worked on triggering an action based on a command on the Arduino Nano BLE.
CSI:
Explored and tried out the CatBoost model to check the accuracy of the inferences.
Collected test data from other rooms and working on testing and comparing the models trained.
Swin Transformers:
Explored the basic use case of transformers and its application in language models.
Working on understanding the use cases in image classification and working on a simple model for the same.

Accomplishments:

Dev:
AI-243: Planning for the services of first phase – ==Closed.==
AI-343: Adapt Inference Model Comparison – ==Closed.==
AI-245: Device firmware planning / design – ==Closed.==
AIR-3674: Run ML flow for 5.8 – ==Closed.==
AI-334: Feature enhancements - Prediction and Validation – ==Closed.==
AI-269: Profiling and Comparing metrics, Registering a model – ==In Progress.==
AI-336: Explore on Transformers and Swin Transformers – ==In Progress.==
AI-337: Inclusion of Level 0 details of the collected models in the DB – ==In Progress.==
AI-341: ML - Perf Report Generation – ==In Progress.==
AI-328: Model Zoo site with landing page functionalities – ==In Progress.==
AI-342: Speech recognition - Arduino Nano BLE – ==In Progress.==
AI-326: Speech Recognition - Wake Up word based use case – ==In Progress.==
AI-340: Retraining of YOLACT model for 480x480 image size – ==In Progress.==
AI-339: CSI data collection and model training and testing – ==In Progress.==
AI-324: Driver Distraction Detection system - Identify model and test on RPI – ==In Progress.==
AI-333: Feature enhancements - Model Upload – ==In Progress.==

DevOps:
AI-337: CI/CD Implementation for UI – ==Closed.==
AI-338: CI/CD Implementation for backend – ==Closed.==
AI-306: MLflow and DVC Integration – ==In Progress.==

Issues for Escalations/Blockers:
NIL

Plan for next week:

Model Zoo:
Additional changes to the UX and implementation of the same in the UI.
Inclusion of About Us and Contact Us pages.
Feature enhancements to the CLI tool.
Inclusion of Level 0 details of the curated models into the DB.

AI Explorations:
Driver Distraction Detection:
Improve the model accuracy and add additional classes as required.
Test the system on Raspberry Pi and collect the metrics.
Speech Recognition:
Work on real-time speech recognition using ASR.
Explore the speech recognition use cases on the ESP32.
CSI:
Explore other deep learning models for CSI based human detection.
Transformers:
Explore transformers and Swin Transformers.
Look into a demo of Swin Transformers for image classification.

PTO and Vacation:
Anees: 13th May
Jayaprakash: 16th May
Shaik: 13th May (Voting), 16th May

DETAILED SUMMARY:

JIRA TICKET(S) COMPLETED:

[JENANI SAMPATH] AI-337: CI/CD Implementation for UI – ==Closed.==
Created a gitlab-ci for the front-end implementation.
Added build and deploy stages.
Pipeline will be triggered whenever code is merged to the master branch.
Used the GPU server as the runner.
[JENANI SAMPATH] AI-338: CI/CD Implementation for backend – ==Closed.==
Created the gitlab-ci for the backend services.
Added build and deploy stages.
Pipeline will be triggered whenever code is merged to the develop branch.

[SURAJRAM SRINIVASAN] AI-243: Planning for the services of first phase – ==Closed.==
Planned and identified the items and the services for the first cut of deployment.
Working on implementing the UI/UX and the CLI tool for the same.

[ANEES ASLAM] AI-343: Adapt Inference Model Comparison – ==Closed.==
Completed integrating TFlite_utils.py for increased resolution.
Tested multiple inferences to compare performance and results across different models.
The metrics are posted in the Synaptics AIML group for reference.

[ANEES ASLAM] AI-245: Device firmware planning / design – ==Closed.==
The initial design and plan are completed and verified across the team.
The overall architecture and targets are updated in the wiki.

[JAYAPRAKASH MASILAMANI] AIR-3674: Run ML flow for 5.8 – ==Closed.==
Added the test cases with respect to the recent Fingerprint 5.8 model update changes, which are prediction of the model using the IElist hex value and an increase in model accuracy based on hostname/brand.
Updated the test script for the inclusion of the common ielist in the input parameter for the inference.
Ran the script and successfully logged the model accuracy metrics and test results in MLflow.

[SHANMUGA PRIYA RAVISHANKAR] AI-333: Feature enhancements - Model Upload – ==Closed.==
File-already-exists check added.
Recorded video prediction done and tested.
Live video prediction, saving the results and downloading the output to the local system is done.

[SANJAY RAVI] AI-334: Feature enhancements - Prediction and Validation – ==Closed.==
Implemented dynamic remote path for model deployment.
Dataset changes (seg, pose, classify and detect) are implemented.
PyTorch support implemented for predict and validation.
Listing all available PyTorch models.

JIRA TICKET(S) IN PROGRESS:

[JENANI SAMPATH] AI-306: MLflow and DVC Integration – ==In Progress.==
Added a custom dataset for training a pretrained model.
Trained the yolov8n pre-trained model with the vehicle dataset and hyper-parameter tuning, added the inference to MLflow, and pulled the data from DVC.
Performed validation and prediction as well and uploaded the metrics to MLflow.
[ETA - 20/05]

[SURAJRAM SRINIVASAN] AI-269: Profiling and Comparing metrics, Registering a model – ==In Progress.==
Completed the code for the model profiling and comparing metrics.
Added code for registering a model using MLflow APIs.
Working on testing the model registration and its use in the model lifecycle part.
[ETA - 20/05]

[SURAJRAM SRINIVASAN] AI-336: Explore on Transformers and Swin Transformers – ==In Progress.==
Started exploration on transformers and their use in vision and text domains.
Exploring the use and the use cases for Swin Transformers.
Working on bringing up a simple image classification model using Swin Transformers.
[ETA - 23/05]

[SURAJRAM SRINIVASAN] AI-337: Inclusion of Level 0 details of the collected models in the DB – ==In Progress.==
Working on loading the details from the curated list of models into the DB.
Working on Level 0 details in the DB from the curated list.
[ETA - 22/05]

[ANEES ASLAM] AI-341: ML - Perf Report Generation – ==In Progress.==
Completed: Notebook containing the end-to-end flow for inference code and results submission.
Tested for image classification with ResNet, SSD and MobileNet.
Issue: the "mlperf-loadgen" module is needed for capturing logs (deprecated).
Necessary guides have request-only access.
Fixes: Access has been requested from MLCommons. Generate logs via external modules.
Next Steps: Get started and register embedUR for cohorts offered by MLCommons. They provide regular meetings and collaboration.

[KEERTHANA SUNDARBABU] AI-328: Model Zoo site with landing page functionalities – ==In Progress.==
Implemented responsive design for the landing page across different devices.
Added the flowchart design to the homepage for the model life cycle.
[ETA - 22/05]

[KEERTHANA SUNDARBABU] AI-342: Speech recognition - Arduino Nano BLE – ==In Progress.==
Installed the Harvard tinyml library on the Arduino Nano 33 BLE.
Compiled and deployed the micro-speech example on the Arduino Nano BLE device.
Made minor changes to the functionality to trigger a function based on whether the recognized word is "yes" or "no": a simple function that prints "Your command is Yes/No" based on the command recognized on the serial console.
[ETA - 17/05]

[JAYAPRAKASH MASILAMANI] AI-326: Speech Recognition - Wake Up word based use case – ==In Progress.==
Implemented speech recognition using the PyTorch WAV2VEC2_ASR_BASE_960H model, where audio files without background noise are transcribed with almost perfect accuracy.
Working on the implementation of real-time ASR using the PyTorch WAV2VEC2_ASR_BASE_960H model and a greedy CTC decoder.
[ETA - 17/05]

[JAYAPRAKASH MASILAMANI] AI-340: Retraining of YOLACT model for 480x480 image size – ==In Progress.==
Created a virtual environment with torch 1.11.0+cu113 and torchvision 0.12.0+cu113 running on Python 3.10.12; it does not support torch 1.4.0.
Added a custom configuration for 480x480 imgsz in the config.py file.
Trained the YOLACT model with this custom configuration for image size 480 and ran into an issue.
Downgraded Python to 3.7 and created a virtual env with torch 1.4.0+cu92 and torchvision 0.5.0+cu92.
Trained the custom configuration for 480x480 imgsz in the config.py file and ran into a CUDA error.
[ETA - 17/05]

[SWATHI RAJAMANI] AI-328: Model Zoo Site with landing page functionalities – ==In Progress.==
Working on landing page enhancements.
Added animations and auto scrolls.
[ETA - 30/05]

[KAUSHIK VISHAL SARAVANAN] AI-339: CSI data collection and model training and testing – ==In Progress.==
Working on collecting data from different rooms on top of the already collected data.
Working on testing the binary classification model with the collected data.
Created a CatBoost model and trained it.
Working on testing the CatBoost model with the collected data.
[ETA - 17/05]

[SHAIK SHUAIB AFNAN] AI-324: Driver Distraction Detection system - Identify model and test on RPI – ==In Progress.==
Identified EfficientNetB7 for model training, ran it for 5 epochs on the GPU server and saved the model as a TensorFlow saved model in .pb format.
Used static images and static video as input for prediction.
Achieved an accuracy of 82% over 5 epochs; will train the model again for 10 epochs and also convert the model into tflite and int8-quantized tflite models.
Also looking for more lightweight models which will perform well and provide high accuracy while remaining lightweight.
Documented the EfficientNetB7 model architecture in the wiki page, along with how it works and other details.
[ETA - 22/05]

[SHANMUGA PRIYA RAVISHANKAR] AI-333: Feature enhancements - Model Upload – ==In Progress.==
Live video streaming to the local system is pending.
[ETA - 21/05]
AI Deployment Infrastructure
AI Lifecycle (Model Nova) Deployment Flow

Step 1: Detect Port / Flash:
The deployment process begins with detecting the port of the target device, using UART / virtual ports (e.g. Arduino Nano 33 BLE) for communication. Flashing the device involves transferring the firmware and necessary configurations to the device for execution. For private devices, software has to be simulated to get acceptance, e.g. simulating the SyNa toolkit for SR110 / VS680.
!Init Process

Step 2: Data Acquisition:
List all sensors, such as cameras and microphones, that capture data from the environment. Sensor drivers facilitate the communication between the sensors and the edge device, ensuring seamless data acquisition. Based on the domain selection, these data are validated before data transformation. Data are shared as JSON across the AI Server and the Client CLI.
!CLI for Data Acquisition

Step 3: Model Selection:
Developers utilize SDKs or the CLI to trigger APIs for selecting appropriate machine learning models for deployment, guided by domain and data analysis. Models are chosen based on the specific domain and inference requirements of the application. Upon validation, model quantization is performed:

Technique | Data requirements | Size reduction | Accuracy | Supported hardware
Post-training float16 quantization | No data | Up to 50% | Insignificant accuracy loss | CPU, GPU
Post-training dynamic range quantization | No data | Up to 75% | Smallest accuracy loss | CPU, GPU (Android)
Post-training integer quantization | Unlabeled representative sample | Up to 75% | Small accuracy loss | CPU, GPU (Android), EdgeTPU, Hexagon DSP
Quantization-aware training | Labelled training data | Up to 75% | Smallest accuracy loss | CPU, GPU (Android), EdgeTPU, Hexagon DSP

Step 4: Deployment:
The selected model is updated and integrated, together with all necessary drivers, headers and inference code, into a combined binary, which is deployed onto the device (Arduino Nano 33 BLE). Deployment involves transferring the model files (e.g., .bin format) to the device's storage for execution on boot-up.
!Deployment Infrastructure
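As a concrete illustration of the final deployment step, a flashed model binary would typically be loaded and prepared for inference on boot with TensorFlow Lite for Microcontrollers, roughly as sketched below. This is not the Model Nova implementation: the model symbol, arena size and operator list are assumptions, and exact TFLM API details vary between library versions.

```cpp
// Sketch: loading a deployed model blob with TensorFlow Lite Micro on boot.
// g_model_data stands for the .bin contents stored in flash; the arena size
// and operator list are assumptions and depend on the actual model.
#include <cstdint>
#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
#include "tensorflow/lite/schema/schema_generated.h"

extern const unsigned char g_model_data[];   // model bytes placed in flash at deploy time

constexpr int kArenaSize = 64 * 1024;         // working memory for tensors (assumed)
static uint8_t tensor_arena[kArenaSize];

static tflite::MicroInterpreter* g_interpreter = nullptr;

void model_init() {
  const tflite::Model* model = tflite::GetModel(g_model_data);
  if (model->version() != TFLITE_SCHEMA_VERSION) {
    return;  // flashed model does not match the schema this firmware expects
  }

  // Register only the operators the model actually uses (assumed set).
  static tflite::MicroMutableOpResolver<4> resolver;
  resolver.AddConv2D();
  resolver.AddFullyConnected();
  resolver.AddSoftmax();
  resolver.AddReshape();

  static tflite::MicroInterpreter interpreter(model, resolver,
                                              tensor_arena, kArenaSize);
  if (interpreter.AllocateTensors() != kTfLiteOk) {
    return;  // arena too small or model invalid
  }
  g_interpreter = &interpreter;  // ready for Invoke() calls from the application
}
```

Keeping the interpreter and arena statically allocated avoids heap use on the microcontroller, which is why the sketch declares them static rather than allocating them dynamically.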
Tiny Flash
Arduino Nano 33 BLE (Source)
Microcontroller and Memory: The Arduino Nano 33 BLE is powered by the Nordic nRF52840 microcontroller, which features an ARM Cortex-M4 CPU running at 64 MHz. It has 1MB of flash memory and 256KB of RAM.
Communication Interfaces: The Nano 33 BLE supports Bluetooth Low Energy (BLE), Universal Asynchronous Receiver-Transmitter (UART), Inter-Integrated Circuit (I2C), and Serial Peripheral Interface (SPI) communication protocols.

2. Design Firmware Architecture
Memory Allocation: Allocate a portion of the flash memory to store AI models. Consider the size of the models and the available flash memory to determine the appropriate allocation.
API Design: Design APIs for communication, such as functions for initiating flashing, transferring model files, and verifying the integrity of the flashed models. Utilize BLE characteristics or UART commands for communication.

3. Implement System API Calls
Using Arduino Libraries: Leverage Arduino libraries to interact with the Nano 33 BLE's hardware and firmware. Utilize libraries for BLE communication, flash memory access, and other necessary functionalities.
Flash Memory Access: Use functions provided by Arduino libraries to read from and write to the flash memory. Ensure data integrity by implementing checksum verification mechanisms.

To develop the flashing mechanism for the Arduino Nano 33 BLE, you'll need to implement protocols and procedures for transferring AI model files onto the device and writing them to the flash memory. Here's a detailed approach:

4. Develop Flashing Mechanism
BLE File Transfer Protocol:
1. Chunking Data: Break down AI model files into smaller chunks suitable for transmission over BLE. Define a packet structure that includes a header specifying the packet's type, size, and sequence number, followed by the data payload.
2. Transfer Procedure: Implement a procedure for transferring these chunks over BLE from a host device (e.g., smartphone, computer) to the Nano 33 BLE. Utilize BLE characteristics for sending and receiving data packets.
3. Acknowledgment Mechanism: Implement an acknowledgment mechanism to ensure reliable data transmission. After receiving each chunk, the Nano 33 BLE sends an acknowledgment back to the host device, confirming successful reception.

Flashing Procedure:
1. Receive and Store Chunks: Upon receiving each chunk of data, store it temporarily in the Nano 33 BLE's RAM. Verify the integrity of each chunk using checksums or other error-detection mechanisms.
2. Write to Flash Memory: Once all chunks are successfully received and stored, write them sequentially to the allocated portion of the flash memory. Use Arduino library functions to access and write data to flash memory addresses.
3. Verify Integrity: After writing the data, perform a verification step to ensure that the flashed model matches the original file. Compare checksums or hash values computed from the flashed data and the original model file.

Error Handling and Recovery:
1. Retry Mechanism: Implement a retry mechanism for failed data transmissions. If a packet is not successfully received or acknowledged, retry sending it a certain number of times before aborting the flashing process.
2. Rollback Mechanism: In case of errors during flashing, implement a rollback mechanism to revert the flash memory to its previous state. This ensures that partially flashed models do not corrupt the device's memory.
3. User Feedback: Provide feedback to the user during the flashing process, indicating the progress, success, or failure of each step. Display error messages or warnings when necessary.

Example Pseudocode: see the sketch at the end of this section.

5. Optimize for Size and Efficiency
Code Optimization: Optimize the firmware code to minimize size and improve efficiency. Use compiler optimizations and avoid unnecessary libraries or functionalities.
Data Compression: Consider compressing model data to reduce the size of transferred files. Utilize compression algorithms suitable for embedded systems.
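The "Example Pseudocode" referenced above did not survive into this page, so the following is a minimal, hedged sketch of the receive-verify-write-acknowledge loop. The chunk structure, checksum and flash-write helper are illustrative assumptions; real firmware would use the BLE library's characteristic callbacks and the nRF52840 flash driver instead of the stubs shown here.

```cpp
// Sketch of the chunked flashing procedure described above (hypothetical
// helpers; the real BLE receive and flash-write calls depend on the BLE
// library and nRF52840 flash driver in use).
#include <cstdint>
#include <cstdio>
#include <vector>

struct Chunk {
  uint16_t seq;                  // sequence number from the packet header
  std::vector<uint8_t> payload;  // model bytes carried by this packet
  uint8_t checksum;              // simple additive checksum for integrity
};

static uint8_t compute_checksum(const std::vector<uint8_t>& data) {
  uint8_t sum = 0;
  for (uint8_t b : data) sum = static_cast<uint8_t>(sum + b);
  return sum;
}

// Placeholder for writing one verified chunk to its flash offset.
static bool write_chunk_to_flash(uint32_t offset, const std::vector<uint8_t>& data) {
  std::printf("[stub] writing %u bytes at flash offset 0x%08x\n",
              (unsigned)data.size(), (unsigned)offset);
  return true;
}

// Receive loop: verify each chunk, ack or request a retry, then flash it.
static bool flash_model(const std::vector<Chunk>& incoming, size_t chunk_size) {
  for (const Chunk& c : incoming) {
    if (compute_checksum(c.payload) != c.checksum) {
      std::printf("chunk %u corrupt, requesting retransmission\n", c.seq);
      return false;  // real firmware would NACK and retry a bounded number of times
    }
    if (!write_chunk_to_flash(static_cast<uint32_t>(c.seq * chunk_size), c.payload)) {
      return false;
    }
    std::printf("chunk %u acknowledged\n", c.seq);  // ACK back to the host over BLE
  }
  return true;  // a final whole-image hash comparison would follow here
}

int main() {
  std::vector<uint8_t> payload = {0x01, 0x02, 0x03};
  std::vector<Chunk> chunks = {{0, payload, compute_checksum(payload)}};
  return flash_model(chunks, payload.size()) ? 0 : 1;
}
```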
Examples
TensorFlow Micro experiments verified using the Tiny Machine Learning Kit featuring the Arduino Nano 33 BLE Sense board.

!TinyML Kit

✅ SPEECH
Keyword Spotting (KWS)
Microphone (in-built)

✅ VISION
Person Detection
Visual Wake Word (VWW)
OmniVision OV7675 Camera

✅ IMU Sensor
Magic Wand - identifies: ALPHANUMERIC with movement
The LSM9DS1 is a system-in-package featuring a 3D digital linear acceleration sensor, a 3D digital angular rate sensor, and a 3D digital magnetic sensor.
Nano Speech
Application Structure

Audio Preprocessor
The Audio Preprocessor model converts raw audio samples into a spectrographic feature. Audio samples are input to the model in windowed frames, each window overlapping the previous one. The model doesn't take in raw audio sample data; instead it works with spectrograms, which are two-dimensional arrays made up of slices of frequency information, each taken from a different time window.

!1 sec Window

The recipe for creating the spectrogram data is that each frequency slice is created by running an FFT across a 30 ms window of the audio sample data. The input samples are treated as real values between -1 and +1 (encoded as -32,768 and 32,767 in 16-bit signed integer samples). The audio sampling window stride is 20 ms, thus every window overlaps the previous one by 10 ms. This results in an FFT with 257 entries. Every sequence of approximately six entries is averaged together, giving a total of 40 frequency buckets in the slice. The results are further processed by down-scaling, noise reduction, automatic gain control, and a final downscaling. Each adjacent frequency entry is stored in ascending memory order (frequency bucket 0 at data[0], bucket 1 at data[1], etc.). The window for the frequency analysis is then moved forward by 20 ms, and the process is repeated, storing the results of the new frequency slice in the next memory row.

The training is configured for raw audio samples of 1000 ms in length. With a window size of 30 ms and a stride of 20 ms, some 49 frequency slices can be created from 1000 ms of audio data. Thus, the preprocessing produces a single-channel image that is 40 pixels wide and 49 rows high. When sufficient features have been accumulated, those features can be provided as input to the speech model. These models expect the audio input as:
30 ms window frame
20 ms window stride
16 kHz sample rate
16-bit signed PCM data
single channel (mono)

Model Architecture
!YES Waveform
!NO Waveform
This is a simple model comprised of a Convolutional 2D layer, a Fully Connected layer or MatMul layer (output: logits) and a SoftMax layer (output: probabilities), as shown below. The output probabilities fall into four categories:
1. Silence
2. Unknown
3. YES
4. NO
The input to the model is 49 spectrographic features, each feature consisting of 40 channels of data. The features are generated by the Audio Preprocessor model.

Hardware
!The MP34DT05 microphone sensor

Results
!YES
!Undetected
!NO
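To make the feature geometry above concrete, the short sketch below works out the numbers stated in the text (16 kHz mono audio, a 1000 ms clip, 30 ms windows with a 20 ms stride, 40 frequency buckets per slice) and shows the row-major buffer indexing. The constants and helper are illustrative, not the actual preprocessor code.

```cpp
// Sketch of the spectrogram feature geometry described above. The numbers
// mirror the text; the code only computes sizes and indexing, it is not the
// actual audio preprocessor.
#include <cstdio>

constexpr int kSampleRateHz = 16000;
constexpr int kClipMs       = 1000;
constexpr int kWindowMs     = 30;
constexpr int kStrideMs     = 20;
constexpr int kFreqBuckets  = 40;   // averaged down from the 257-entry FFT

// Index of one frequency bucket within the flat feature buffer: adjacent
// buckets of a slice are contiguous, and slices are stacked row after row.
constexpr int feature_index(int slice, int bucket) {
  return slice * kFreqBuckets + bucket;
}

int main() {
  // Number of full 30 ms windows that fit into 1000 ms with a 20 ms stride.
  const int num_slices = (kClipMs - kWindowMs) / kStrideMs + 1;    // 49
  const int samples_per_window = kSampleRateHz * kWindowMs / 1000; // 480
  const int feature_elements = num_slices * kFreqBuckets;          // 1960

  std::printf("slices: %d, samples per window: %d, feature size: %d x %d = %d\n",
              num_slices, samples_per_window, kFreqBuckets, num_slices,
              feature_elements);
  std::printf("bucket 1 of slice 0 lives at data[%d]\n", feature_index(0, 1));
  return 0;
}
```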
Transformers
Transformers are a type of neural network architecture that transforms or changes an input sequence into an output sequence. They do this by learning context and tracking relationships between sequence components. For example, consider this input sequence: "What is the color of the sky?" The transformer model uses an internal mathematical representation that identifies the relevancy and relationship between the words color, sky, and blue. It uses that knowledge to generate the output: "The sky is blue."

Organizations use transformer models for all types of sequence conversions, from speech recognition to machine translation and protein sequence analysis. Transformers process long sequences in their entirety with parallel computation, which significantly decreases both training and processing times. This has enabled the training of very large language models (LLMs), such as GPT and BERT, that can learn complex language representations. They have billions of parameters that capture a wide range of human language and knowledge, and they're pushing research toward more generalizable AI systems.

The Transformer is an architecture for transforming one sequence into another with the help of two parts (an encoder and a decoder), but it differs from previously described/existing sequence-to-sequence models because it does not use any recurrent networks (GRU, LSTM, etc.).

Step-by-step method of inferencing (for example, translating a French sentence to English):
1. Input the full encoder sequence (the French sentence); as the decoder input, take an empty sequence with only a start-of-sentence token in the first position.
2. This outputs a sequence, of which we take only the first element. That element is filled into the second position of the decoder input sequence, which now has a start-of-sentence token and a first word/character in it.
3. Input both the encoder sequence and the new decoder sequence into the model.
4. Take the second element of the output and put it into the decoder input sequence.
5. Repeat this until an end-of-sentence token is predicted, which marks the end of the translation.

Self-Attention Mechanism:
Transformer models modify this process by incorporating something called a self-attention mechanism. Instead of processing data in order, the mechanism enables the model to look at different parts of the sequence all at once and determine which parts are most important. Imagine that you're in a busy room and trying to listen to someone talk. Your brain automatically focuses on their voice while tuning out less important noises. Self-attention enables the model to do something similar: it pays more attention to the relevant bits of information and combines them to make better output predictions. This mechanism makes transformers more efficient, enabling them to be trained on larger datasets. It's also more effective, especially when dealing with long pieces of text where context from far back might influence the meaning of what's coming next.

Blocks of a Transformer
In the original architecture diagram, the encoder is on the left and the decoder is on the right. Both the encoder and the decoder are composed of modules that can be stacked on top of each other multiple times, which is described by Nx in the figure. We see that the modules consist mainly of Multi-Head Attention and Feed Forward layers. The inputs and outputs (target sentences) are first embedded into an n-dimensional space, since we cannot use strings directly. One slight but important part of the model is the positional encoding of the different words.
Since we have no recurrent networks that can remember how sequences are fed into a model, we need to somehow give every word/part in our sequence a relative position, since a sequence depends on the order of its elements. These positions are added to the embedded representation (n-dimensional vector) of each word.

Input embeddings
This stage converts the input sequence into the mathematical domain that software algorithms understand. At first, the input sequence is broken down into a series of tokens or individual sequence components. For instance, if the input is a sentence, the tokens are words. Embedding then transforms the token sequence into a mathematical vector sequence. The vectors carry semantic and syntax information, represented as numbers, and their attributes are learned during the training process.

You can visualize vectors as a series of coordinates in an n-dimensional space. As a simple example, think of a two-dimensional graph, where x represents the alphanumeric value of the first letter of the word and y represents their categories. The word banana has the value (2,2) because it starts with the letter b and is in the category fruit. The word mango has the value (13,2) because it starts with the letter m and is also in the category fruit. In this way, the vector (x,y) tells the neural network that the words banana and mango are in the same category. Now imagine an n-dimensional space with thousands of attributes about any word's grammar, meaning, and use in sentences mapped to a series of numbers. Software can use the numbers to calculate the relationships between words in mathematical terms and understand the human language model. Embeddings provide a way to represent discrete tokens as continuous vectors that the model can process and learn from.

Positional encoding
Positional encoding is a crucial component in the transformer architecture because the model itself doesn't inherently process sequential data in order. The transformer needs a way to consider the order of the tokens in the input sequence. Positional encoding adds information to each token's embedding to indicate its position in the sequence. This is often done by using a set of functions that generate a unique positional signal that is added to the embedding of each token. With positional encoding, the model can preserve the order of the tokens and understand the sequence context.

Transformer block
A typical transformer model has multiple transformer blocks stacked together. Each transformer block has two main components: a multi-head self-attention mechanism and a position-wise feed-forward neural network. The self-attention mechanism enables the model to weigh the importance of different tokens within the sequence. It focuses on relevant parts of the input when making predictions. For instance, consider the sentences "Speak no lies" and "He lies down." In both sentences, the meaning of the word lies can't be understood without looking at the words next to it. The words speak and down are essential to understand the correct meaning. Self-attention enables the grouping of relevant tokens for context.

The feed-forward layer has additional components that help the transformer model train and function more efficiently. For example, each transformer block includes:
Connections around the two main components that act like shortcuts. They enable the flow of information from one part of the network to another, skipping certain operations in between.
Layer normalization that keeps the numbers, specifically the outputs of different layers in the network, inside a certain range so that the model trains smoothly.
Linear transformation functions so that the model adjusts values to better perform the task it's being trained on, like document summarization as opposed to translation.

Linear and softmax blocks
Ultimately the model needs to make a concrete prediction, such as choosing the next word in a sequence. This is where the linear block comes in. It's another fully connected layer, also known as a dense layer, before the final stage. It performs a learned linear mapping from the vector space to the original input domain. This crucial layer is where the decision-making part of the model takes the complex internal representations and turns them back into specific predictions that you can interpret and use. The output of this layer is a set of scores (often called logits) for each possible token. The softmax function is the final stage that takes the logit scores and normalizes them into a probability distribution. Each element of the softmax output represents the model's confidence in a particular class or token.
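To tie the self-attention and softmax descriptions together, here is a small, self-contained sketch of single-head scaled dot-product attention with a row-wise softmax. It is a textbook illustration with toy dimensions, not any particular framework's implementation.

```cpp
// Minimal single-head scaled dot-product attention: scores = Q*K^T / sqrt(d),
// row-wise softmax, then a weighted sum of V. Toy dimensions, illustration only.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

using Matrix = std::vector<std::vector<double>>;  // row-major

// Softmax over one row of logits: exponentiate and normalize to probabilities.
static std::vector<double> softmax(const std::vector<double>& logits) {
  double max_logit = logits[0];
  for (double v : logits) max_logit = std::max(max_logit, v);
  std::vector<double> probs(logits.size());
  double sum = 0.0;
  for (size_t i = 0; i < logits.size(); ++i) {
    probs[i] = std::exp(logits[i] - max_logit);  // subtract max for stability
    sum += probs[i];
  }
  for (double& p : probs) p /= sum;
  return probs;
}

// attention(Q, K, V): each output row is a probability-weighted mix of V rows.
static Matrix attention(const Matrix& Q, const Matrix& K, const Matrix& V) {
  const size_t n  = Q.size();      // number of tokens
  const size_t d  = Q[0].size();   // key/query dimension
  const size_t dv = V[0].size();   // value dimension
  Matrix out(n, std::vector<double>(dv, 0.0));
  for (size_t i = 0; i < n; ++i) {
    std::vector<double> scores(n, 0.0);
    for (size_t j = 0; j < n; ++j) {
      for (size_t k = 0; k < d; ++k) scores[j] += Q[i][k] * K[j][k];
      scores[j] /= std::sqrt(static_cast<double>(d));  // scale by sqrt(d_k)
    }
    std::vector<double> weights = softmax(scores);     // attention weights
    for (size_t j = 0; j < n; ++j)
      for (size_t k = 0; k < dv; ++k) out[i][k] += weights[j] * V[j][k];
  }
  return out;
}

int main() {
  Matrix Q = {{1.0, 0.0}, {0.0, 1.0}};
  Matrix K = {{1.0, 0.0}, {0.0, 1.0}};
  Matrix V = {{1.0, 2.0}, {3.0, 4.0}};
  Matrix out = attention(Q, K, V);
  std::printf("out[0] = (%.3f, %.3f)\n", out[0][0], out[0][1]);
  return 0;
}
```

Each output row mixes the value vectors according to how strongly the corresponding query matches every key, which is the "focus on the relevant parts" behaviour described in the text.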
Dataset Management
Data Version Control

DVC is a version control system for data and machine learning projects. It helps you track changes to your datasets, manage experiments, and collaborate with team members. Here's how you can use DVC:
1. Set up DVC: install DVC and initialize a DVC project.
2. Version control for data: add data to DVC and push it to remote storage.
3. Version control for models: add models and version them the same way as data.
4. Manage experiments: use DVC to manage experiments, track metrics, and compare different versions of your model.
5. Collaborate: use DVC to collaborate with team members. You can use remote storage such as AWS S3 or Google Cloud Storage to store your data and models.
6. Usage with S3: to use S3 with DVC, set the DVC remote storage to the specific S3 bucket.

MLflow and DVC Integration

Model Lifecycle Backend

Backend Implementation
1. The flow starts with fetching the datasets from remote storage using data version control. DVC works on top of an SCM; here we are using Git as the SCM. Whenever a dataset is added to DVC and pushed to remote storage, a .dvc file is created with information about the dataset (number of files, checksum, etc.).
2. Once the data is pushed, it is git-ignored and only the .dvc files are tracked in Git. This ensures that the integrity of the data is maintained.
3. In this project we have chosen a pre-trained model and trained it with our own custom dataset. The model chosen is YOLOv8n object detection and the custom dataset is vehicles. The process is as follows (a sketch of this flow is shown after this list):
   1. Data is pulled from remote storage using DVC.
   2. The pulled data is fed to the model using the data.yaml file.
   3. The model is trained on this data with some hyperparameter tuning, such as imgsz.
   4. We have set imgsz to 640, which means all images are resized to this size before being passed to the model.
   5. Once the model is trained, it is pushed to MLflow along with the other metrics and parameters from training.
   6. The trained model is then validated using the validation dataset, which is also pulled from remote storage using DVC.
   7. It is then used for prediction.
4. This is a simple implementation of the backend of our model lifecycle, which will be implemented in the further stages of Model Zoo. Here the Ultralytics backend was chosen for the Python project; other backends are possible too.
5. The metrics and other inference parameters are logged in MLflow, which we can use to integrate this backend with our UI. Users will then have a range of models/pre-trained models that they can download and train with their own dataset, or they can use the dataset we provide.
6. Since we can register the model in MLflow, we can pull the model from MLflow itself and use it with other APIs for the frontend implementation.
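A minimal, hedged Python sketch of the backend flow described above: pulling the dataset with DVC, training YOLOv8n with the Ultralytics API at imgsz=640, and logging the run to MLflow. The remote configuration, tracking URI, experiment name, epoch count, and output paths are placeholders and assumptions, not values from this document.

```python
import subprocess
import mlflow
from ultralytics import YOLO

# 1. Pull the versioned dataset from remote storage with DVC.
#    Assumes `dvc init` has been run and a default remote (e.g. an S3 bucket)
#    has been configured, e.g. `dvc remote add -d storage s3://<bucket>/<path>`.
subprocess.run(["dvc", "pull"], check=True)

# 2. Point MLflow at the tracking server (placeholder URI and experiment name).
mlflow.set_tracking_uri("http://mlflow.example.local:5000")
mlflow.set_experiment("yolov8n-vehicles")

with mlflow.start_run():
    # 3. Load the pre-trained YOLOv8n model and train it on the custom
    #    vehicles dataset described by data.yaml; imgsz=640 resizes all
    #    images to 640 before they are passed to the model.
    model = YOLO("yolov8n.pt")
    model.train(data="data.yaml", imgsz=640, epochs=50)

    # 4. Log the hyperparameters and a headline validation metric.
    mlflow.log_param("imgsz", 640)
    mlflow.log_param("epochs", 50)
    metrics = model.val()  # validation on the DVC-tracked validation split
    mlflow.log_metric("mAP50-95", float(metrics.box.map))

    # 5. Store the trained weights as an artifact so the UI / other APIs can
    #    later fetch the model from MLflow. The path below is the default
    #    Ultralytics output location and may differ per run.
    mlflow.log_artifact("runs/detect/train/weights/best.pt")
```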
UX Data Feedback
Sample layer-wise metrics table for the SqueezeNet model:

| Layer | Ops (M) | MACs (M) | CPU_Cycles (M) | Energy (m) | Time (m) |
| --- | --- | --- | --- | --- | --- |
| conv_2d | 21.3 | 166.8 | 34.5 | 2.1 | 0 |
| concatenation | 0 | 0 | 0 | 0 | 0 |
| conv_2d | 12.5 | 6.2 | 13.2 | 2.6 | 169.4 |
| max_pool_2d | 839.8K | 0 | 0 | 0 | 0 |
| conv_2d | 28.1 | 14.0 | 21.9 | 4.4 | 281.4 |
UX Feedback and Action Items
Sample layer-wise metrics table for

| Layer | Ops (M) | MACs (M) | CPU_Cycles (M) | Energy (m) | Time (m) |
| --- | --- | --- | --- | --- | --- |
| conv_2d | 21.3 | 166.8 | 34.5 | 2.1 | 0 |
| concatenation | 0 | 0 | 0 | 0 | 0 |
| conv_2d | 12.5 | 6.2 | 13.2 | 2.6 | 169.4 |
| max_pool_2d | 839.8K | 0 | 0 | 0 | 0 |
| conv_2d | 28.1 | 14.0 | 21.9 | 4.4 | 281.4 |
Dataset with labels for Benchmark models and run MLPerf Tiny
MLPerf Tiny - Performance mode
MLPerf Tiny - Energy mode
Build and Flash
Pre-Requisites

Install the following development software and tools required to work with the STM32 MCU. Install the latest versions of:
STM32CubeMX
STM32CubeIDE
STM32CubeProg
STM32Cube.AI

Code pull and build instructions
1. Code pull: git clone [email protected]:stm32/nucleo_l4r5zi_p.git
2. Open the .cproject file in STM32CubeIDE and all the source code will be opened in the IDE (CubeIDE is built on top of the Eclipse IDE).
3. Two flavors of builds are available: Debug and Release. The Debug build is preferred and can be used with breakpoints and GDB.
4. Configure the workspace with the following settings. NOTE THAT THE FOLLOWING CONFIGS ARE ALREADY DONE; THE STEPS BELOW ARE FOR VERIFICATION AND INFORMATION.
   1. Right-click on the project and click Properties (or press Alt + Enter); the properties window will open.
   2. Select the C/C++ Build and Settings option.
   3. Select the MCU Settings sub-option and ensure "Use float with printf from newlib-nano" is enabled.
   4. Go to MCU GCC Assembler, select "Include paths", and ensure the "tinyml_api" directory is present.
   5. Go to MCU GCC Compiler, select "Include paths", and ensure the "tinyml_api" directory is included.
   6. Go to the C/C++ General properties and select the "Paths and Symbols" settings. Ensure tinyml_api is included in the "Includes" tab and the "Source Location" tab.
   7. Go to the "Symbols" tab and configure a new CFLAG "EE_CFG_ENERGY_MODE"; set it to 0 for Performance mode and to 1 for Energy mode.
   8. Click "Apply and Close".
5. Select the Debug option and the code will start to build; it takes approximately 2 minutes.
6. A Debug folder will be created in the workspace location, and it will contain the .elf and .bin files which need to be flashed onto the STM32.
7. NOTE: The .ioc files can be opened in CubeMX to configure the peripherals of the STM32 Nucleo board, and the code will be generated based on those configs (e.g. Timer peripheral config, UART config, etc.).

Image flashing and debugging

STM32CubeIDE provides an in-built option to flash the image that has been built. Connect the STM32 board with a Type-B USB cable and ensure it is powered up. Note that the STM32 Nucleo board has an ST-LINK programmer (SWD) for flashing and debugging.
1. In STM32CubeIDE, click the Play or Debug button, select the Debug As or Run As option, and select STM32 C/C++ Application.
Nucleo L4R5ZI-P
MLCommons used the Nucleo L4R5ZI as a reference board for running the benchmark AI models and measuring performance (latency/throughput, accuracy). However, the Nucleo L4R5ZI wasn't available, so the Nucleo L4R5ZI-P was procured as an alternative to run the MLPerf Tiny benchmark models. This serves as a baseline reference and gives a good understanding of the framework. https://www.st.com/en/evaluation-tools/nucleo-l4r5zi-p.html
YOLACT Model Creation
Currently the STM32N6 code has an assertion that the neural network input from the video display is square (height == width). I removed this assertion and was able to run a 480x640 model, but performance was around 3.5 FPS. So I instead built a new 480x480 model, and we are getting around 13.3 FPS.
STMicro STM32N6 Image Segmentation
STM has a new processor, the N6, which integrates an Arm Cortex-M55 with an STM proprietary NPU. Our team has been asked to integrate a YOLOv8 image segmentation demo on the NPU as a demonstration. STM's development board has an integrated MIPI CSI camera (IMX335) and an output display. The SoC has an integrated video processing mechanism which can simultaneously send the camera output to the display and to the neural network input. The video output is a high-FPS display, and the AI inference output can be overlaid on top.
Plan
Learning Courses & Certifications

| Course | Due | Enrolled/Completed |
| --- | --- | --- |
| Kaggle Tutorials (Learn section): Intro to ML, Intermediate ML, Feature Engineering | | Ragul - Started; Anees - Started; Suraj - Started; Keerthana - Started |
| Machine Learning Engineering for Production (MLOps) Specialization - Deeplearning.ai | | Jenani - Started |
| Machine Learning Operations (MLOps): Getting Started - Google | | Jenani - Started |
| Andrew NG - ML Specialization (Certification) | | |
| Andrew NG - AI for Everyone (Certification) | October | Suraj - Started; Keerthana - Started; Jayaprakash - Started |
| FAST.AI - Intro to ML for Coders | October | Suraj - Started |
| Fast.AI - Deep Learning | | |
| Google - TensorFlow for ML (Certification) | December | |
| AWS SageMaker Training | | |
| AWS Data Analyst Training (Certification) | December | Ragul - Todo |
| AWS Machine Learning (Certification) | December | Ragul - Started |

---

Projects

Astral Client Fingerprinting

Model updates to reduce the time taken for prediction.
Model updates to improve accuracy after validation.

Action Items
Production deployment with Data Collector @Jenani Sampath @Surajram Srinivasan -- Done
Improve the training dataset and improve coverage with more data @Anees Aslam -- Done
Batch processing of prediction @Anees Aslam

Data Analysis
Report on histogram/distribution of Brand/Model/Type/OS @Anees Aslam -- Done
Report on the count of device types/brands/models/OS in different circles/regions, e.g. an aggregate report of how many Airtel-provided Android set-top boxes, how many laptops, etc. are in each circle.

Improvement to ML Model

1. FP prediction time reduction: @Anees Aslam -- Done
   Model loading: reconfigured the model file with pickle (built-in) instead of the Joblib module, and initialized the constructor with static elements, reducing frequent re-loading.
   ML inference: parallelized the device-model search, which was responsible for 25% of the time elapsed. Around 40% of the response time was spent on OUI lookup, which was replaced with an alternative module (database by Wireshark and IEEE).
   RESULT: The time elapsed for the entire FP prediction is reduced by 10x. Earlier, prediction for 231 clients without cache took about 26 seconds; with the latest model updates it takes around 3 seconds without cache.
2. Prediction accuracy: @Anees Aslam -- Done
   DHCP Opts: DHCP Request Options based detection is included for accurate detection.
   Increased coverage for more types (Printer, TV, Refrigerator, Router, Audio Devices).
   Improved the base dataset, which was updated to remove dependencies.
   Reduced false predictions by updating novelty (measure of unseen) detection after each instance.
3. Reducing unknowns @Anees Aslam @Jayaprakash Masilamani -- Done
   The SVM predicts "Unknown" for brand labels that are unseen in the training set. To reduce this, we need to retrain the ML model on a wider dataset.
   Latest model, 3rd Jan (including ZTE):
   Count of Unknowns: All Unknown --> 12,276,554; Only Type --> 15,091,953; Only Model --> 24,308,828; Only Brand --> 18,983,363; Only OS --> 15,386,236
   Coverage: Type --> 61.35%; Model --> 37.75%; Brand --> 51.39%; OS --> 60.60%
   Non-ZTE:
   Count of Unknowns: All Unknown --> 5,572,761; Only Type --> 6,968,605; Only Model --> 16,043,192; Only Brand --> 12,245,312; Only OS --> 7,246,233
   Coverage: Type --> 77.31%; Model --> 47.78%; Brand --> 60.14%; OS --> 76.41%
   Hence the following architecture is to be implemented:
4. Unique client ID -- Identified corner cases by hitting the client API:
   The same device connected to Wi-Fi as well as Ethernet at the same timestamp -- to be identified as one unique device.
   The same device connected to Wi-Fi via both the radio_24 and radio_5 frequencies.
   Grouped the report based on customer and handled the above cases in order to assign a unique client ID (UCID), which helps identify a specific device within a customer. Each client in a customer group will be assigned a UCID in the format
   Identified the similar-device count by grouping each client by customer and timestamp. Have to check whether any other parameter, such as the IE list, could help identify whether the grouped clients are unique or different.
   Identified 136166 groups of customers with similar device info varying in count range (0, 1, 2, 3, ...) etc.
   Identified 26204181 unique devices and generated UCIDs based on the above logic (see the illustrative sketch after this list).
5. Conversion of the first-layer heuristics and OUI lookup from the fp_ml service (Python) to the fingerprint service (Go) -- @Jayaprakash Masilamani
   Conversion of the heuristics layer from Python to Go:
   Understand the heuristics logic: review the existing Python code for the heuristics layer and understand the logic and conditions applied.
   Design the equivalent logic in Go: write the equivalent logic in Go, being sure to replicate the conditions and determinism of the original heuristics layer.
   Refactor and optimize: take advantage of Go's language features and libraries to refactor and optimize the code. Ensure that the performance and behavior match the original Python implementation.
   Testing: thoroughly test the Go implementation to ensure it produces the same deterministic output as the Python implementation, using test cases that cover various scenarios.
   Integration with the rest of the system: integrate the Go-based heuristics layer into the overall system alongside the existing Python-based SVM model.
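As an illustration of the UCID grouping logic in item 4 above, here is a small hedged pandas sketch. The column names, the "UCID-<n>" format, and the sample rows are placeholders for illustration only; they are not taken from the actual Astral client report schema.

```python
import pandas as pd

# Hypothetical client-report rows; column names and values are placeholders.
clients = pd.DataFrame(
    [
        # Same device connected over Wi-Fi and Ethernet at the same timestamp
        {"customer_id": "C1", "mac": "AA:BB:CC:00:11:22", "hostname": "laptop-1",
         "dhcp_opts": "1,3,6", "dhcp_vendor": "MSFT", "iface": "wifi",
         "timestamp": "2024-01-03 10:00"},
        {"customer_id": "C1", "mac": "AA:BB:CC:00:11:22", "hostname": "laptop-1",
         "dhcp_opts": "1,3,6", "dhcp_vendor": "MSFT", "iface": "ethernet",
         "timestamp": "2024-01-03 10:00"},
        # A second, distinct device for the same customer
        {"customer_id": "C1", "mac": "DD:EE:FF:33:44:55", "hostname": "phone-1",
         "dhcp_opts": "1,121,3", "dhcp_vendor": "android-dhcp", "iface": "wifi",
         "timestamp": "2024-01-03 10:00"},
    ]
)

# Fields that, together with the customer, define "the same device".
device_keys = ["mac", "dhcp_opts", "dhcp_vendor", "hostname"]
key_cols = ["customer_id"] + device_keys

# Assign one UCID per distinct (customer, device) combination, so the
# Wi-Fi and Ethernet rows above collapse onto a single UCID.
# The "UCID-<n>" format is a placeholder; the real format is not given here.
clients["ucid"] = "UCID-" + clients.groupby(key_cols, sort=False).ngroup().astype(str)

print(clients[["customer_id", "mac", "iface", "ucid"]])
print("unique devices:", clients["ucid"].nunique())

# Similar-device counts per customer and timestamp, as described above.
print(clients.groupby(["customer_id", "timestamp"])["ucid"].nunique())
```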
| Milestone | ETA | Status |
| --- | --- | --- |
| First Release with reasonable Accuracy (ML Model & Index lookup) | 27 Sep 2023 | Done |
| Performance Improvement Release | 5 Oct 2023 | Done |
| ~~Ready for Staging Deployment & Demo~~ | ~~10 Oct 2023~~ | ~~On Track~~ |
| ~~Simpler ML Model (Using SVM)~~ | ~~12 Oct 2023~~ | ~~On Track~~ |
| Simpler ML Model (Using SVM) | 3 Nov 2023 | Done |
| Ready for Staging Deployment & Demo | 3 Nov 2023 | Done |
| Data Collector Service for collecting unknown data | 17 Nov 2023 | Done |
| Continuous Accuracy Improvement with Manual updates and Retraining | 24 Nov 2023 | On Track |
| Prod Deployment | 21 Nov 2023 | Done |
| Prod Deployment with model updates Phase 1 | 18th Dec 2023 | Done |
| Prod Deployment with model updates and time reduction Phase 2 | 10th Jan 2024 | Done |

| Model Version | MLflow link | QA Accuracy % |
| --- | --- | --- |
| 5.7 | | Brand: 76.92; Model: 76.92; OS: 100; Type: 90.76 |
| 5.6 | | Brand: 74.28; Model: 65.49; OS: 83.18; Type: 81.31 |
| 5.5 | | Brand: 70.99; Model: 65.11; OS: 84.57; Type: 81.76 |

| Data | Model Version | Device Coverage % | Model Accuracy % | QA Accuracy % |
| --- | --- | --- | --- | --- |
| Old Astral Sample (Sep): 1) Total Device Entries: 7.266959 ~= 7.2 M; 2) Unique Devices: 3.395551 ~= 3.3 M; 3) Excluding Private MAC - Unique Device: 5,56,948 | 5.0.0 | Type: 77.88%; Model: 44.29%; Brand: 96.10%; OS: 77.88%; All: 96.46% | Type: 94.21%; Model: 87.5%; Brand: 80%; OS: 94.21% | Type: 73.68%; Model: 52.6%; Brand: 52.6%; OS: 73.68%; Test cases: 38 |
| 14 Nov (4pm) - 15 Nov (4pm): 1) Total Device Entries: 166210937 ~= 166.21 M; 2) Unique Devices: 34,593,523 ~= 34.59 M; 3) Excluding Private MAC - Unique Device: 16633582 ~= 16.63 M | 5.2.0 | Type: 51.92%; Model: 33.79%; Brand: 69.05%; OS: 50.95%; All: 80.49% | Type: 96.1%; Model: 87.5%; Brand: 93.6%; OS: 96.1% | Type: 67.5%; Model: 42.5%; Brand: 77.5%; OS: 70%; Test cases: 41 |
| 18 Nov (12 AM) - 19 Nov (12pm): 1) Total Device Entries: 167108433 ~= 167.10 M; 2) Unique Devices: 35335400 ~= 35.34 M; 3) Excluding Private MAC - Unique Device: 16739786 ~= 16.73 M | 5.3.0 / 5.4.1 (with dhcp opts) | Type: 51.78%; Model: 34.29%; Brand: 92.95% (includes devices with brand prediction not possible); OS: 51.11%; All: 96.32% | Type: 96.1%; Model: 87.5%; Brand: 96.9%; OS: 96.1% | I) (24-11-23) Type: 90.74%; Model: 48.95%; Brand: 86.86%; OS: 92.83%; Test cases: 335. II) (29-11-23) Type: 87.82%; Model: 51.89%; Brand: 65.66%; OS: 92.01%; Test cases: 501. Type: 84.98%; Model: 61.5%; Brand: 58.22%; OS: 77%; Test cases: 213 |
| 10th Dec 2023: Unique Devices: 21,051,198 | 5.4.2 (Deployed to Airtel) | Type: 51.46%; Model: 33.98%; Brand: 45.50%; OS: 51.10%; All: 65.90% | | Type: 85.56%; Model: 61.2%; Brand: 78.46%; OS: 85.8%; Test cases: Unit817 |
| 10th Dec 2023: Unique Devices: 21,051,198 | 5.4.3 (Fix by Analysis) | Type: 51.43%; Model: 32.80%; Brand: 54.19%; OS: 51.07%; All: 65.90% | | |
| 10th Dec 2023: Unique Devices: 21,051,198 | 5.5 (Latest) | Type: 60.43% ⬆️; Model: 39.94% ⬆️; Brand: 50.53% (increased accuracy by decreasing false negatives); OS: 60.05% ⬆️; All: 65.90% | | Type: 81.76%; Model: 65.12%; Brand: 70.99%; OS: 84.58%; Test cases: 817 |
| 3rd Jan 2023: Unique Devices: 39,056,099 | 5.5 (Latest) | Type: 59.56%; Model: 37.92%; Brand: 49.02%; OS: 59.16%; All: 64.77% | | |
| 3rd Jan 2023: Non ZTE Devices: 30,722,532; ZTE Devices: 8,333,567 (+ Hostname: 0; + DHCP Opts: 0; + DHCP Vendor: 0; + OUI Mac: 3,354,425 (40.2%)) | 5.5 (Latest) | Non ZTE Devices: Type: 75.72%; Model: 48.21%; Brand: 59.51%; OS: 75.21%; All: 79.52%. ZTE Devices Prediction: Type: 0%; Model: 0%; Brand: 10.37%; OS: 0%; All: 10.37% | | |
| 3rd Jan 2023: Non ZTE Devices: 30,722,532; ZTE Devices: 8,333,567 (+ Hostname: 0; + DHCP Opts: 0; + DHCP Vendor: 0; + OUI Mac: 3,354,425 (40.2%)) | 5.6 (Latest) | Non ZTE Devices: Type: 77.31%; Model: 47.78%; Brand: 60.14%; OS: 76.41%. Including ZTE Devices: Type: 61.35%; Model: 37.75%; Brand: 51.39%; OS: 60.60% | | Type: 81.31%; Model: 65.48%; Brand: 74.28%; OS: 83.18%; Test cases: 910 |
| Feb 6th: Unique Devices (UCID): 29,478,109; Non ZTE Devices: 25,727,645 | 5.6 | Type: 70.85% ⬆️; Brand: 59.7% ⬆️; Model: 40.9% ⬆️; OS: 69.6% ⬆️. Non ZTE: Type: 80.3% ⬆️; Brand: 62.2% ⬆️; Model: 46.6%; OS: 79.08% ⬆️ | | |
| March 6th: Unique Devices (UCID): 26,592,384; Non ZTE Devices: 22,883,547 | 5.6 | Type: 73.45%; Brand: 62.04%; Model: 43.23%; OS: 72.67%. Non ZTE Devices: Type: 84.61%; Brand: 65.09%; Model: 49.99%; OS: 83.75% | | |
| March 6th: Unique Devices (UCID): 26,592,384; Non ZTE Devices: 22,883,547 | 5.7 (brand predicted from parsed IE list) | Type: 73.9% ⬆️; Brand: 62.9% ⬆️; Model: 38.1%; OS: 72.5%. Non ZTE: Type: 85.2% ⬆️; Brand: 65.5% ⬆️; Model: 44.05%; OS: 83.6% | | Type: 90.76%; Model: 76.92%; Brand: 76.92%; OS: 100%; Test cases: 65 |
| April 20th: Unique Devices (UCID): 29,346,895; Non ZTE Devices: 24,146,417 | 5.7 (removed DIX for non-ZTE calculation) | Type: 69.4%; Brand: 60.9%; Model: 40.1% ⬆️; OS: 67.8%. Non ZTE: Type: 83.3%; Brand: 64.3%; Model: 48.4% ⬆️; OS: 81.4% | | |
| April 20th: Unique Devices (UCID): 29,346,895; Non ZTE Devices: 24,146,417 | 5.8 (hostname-based prediction model improvement and IE list hex model prediction for Apple) | Type: 69.7% ⬆️; Brand: 70.6% ⬆️; Model: 60.9% ⬆️; OS: 64.8%. Non ZTE: Type: 83.3%; Brand: 76.2% ⬆️; Model: 63.3% ⬆️; OS: 77.5% | | |

Help:
Total Device Entries: number of client rows collected for the entire 24 hours (without removing duplicates).
Unique Devices: after removing duplicates on mac, dhcp_opts, dhcp_vendor, hostname.
Excluding Private MAC - Unique Device: replace anonymous MACs with X, and remove duplicates on MAC, dhcp_opts, dhcp_vendor, hostname.
Device Coverage %: number of non-Unknown values in the dataset in individual columns. ALL: coverage across all columns; if all the columns are Unknown, that device is not covered.
Model Accuracy %: accuracy from the training dataset.
QA Accuracy %: accuracy from the QA test plan.

Tiny ML @Keerthana Sundarbabu
Action Items:
1. The research paper is about how moving the ML model from cloud to edge is beneficial -- in progress.
2. Compare the training and inference process for call-in prediction on cloud and edge devices, namely the CPU overhead, memory utilization, and CPU time -- done. Optimize the ML model by training it with the top 150 important features -- done.
3. Compare the performance of different models such as CatBoost, XGBoost, and decision trees -- done.
Link to paper:

Synaptics
Action Items:
SyNAP build environment with Docker @Jenani Sampath -- In Progress
Work with Aravindh to bring up the board

Data Analysis:
Action Items:
Generate reports for the questions in .
Work on collecting data for recent dates and run reports on recent data.
Work on reducing the time taken for report generation.

Airtel Call-In Tracking & Prediction
Predicting possible call-ins from customers about problems. Analyzing Wi-Fi problems based on customer data. Recommending actions.
Action Items
1. Data collection from APISIX logs
2. EDA on data for identifying patterns
3. Plan with Self Heal

UCID DETECTION
1.

---

QA

Focus Area
Data Quality Check
Feature Importance Analysis
Cross Validation
Performance Metrics
Confusion Matrix
Testing on New Data
Model Interpretability
Monitor Model Performance
A/B Testing

Action Items: -- DONE
QA testing for model (v5.1 - 9 Nov) @Jayaprakash Masilamani
Accuracy & coverage numbers
Other metrics for model & dataset quality
Deepchecks Test Suite results for the newly added features have been logged in MLflow.
Attaching the MLflow experiment run link below:

Unit testing results are logged in MLflow for every model improvement. Refer to the link below, which has metrics and results as artifacts for the latest model (v5.5.0):

Observations of load testing for the Client List API done on AirTel-QA-Phase2:

Observations of load testing for the Client List API done on Airtel-Prod so far:

Unit tests on the latest models after improvements on reducing unknowns and including STB - @Jayaprakash Masilamani
1. Brand accuracy has increased from 69.89% to 74.31%
2. OS accuracy increased from 83% to 83.06%
3. Type accuracy increased from 79.44% to 81.4%
4. Model accuracy has increased from 62.22% to 65.45%

---

Infra & Ops

GPU Server
Identify how to use the GPU server effectively
Jupyter Hub for every team (running in Docker) -- Done
Dev sandboxes which need GPU access should be spun up by DevOps using Docker containers -- Done
GPU models to be built using Docker -- Done
CI pipelines (DVC, CML git-based pipelines, used for training GPU models) [Fingerprint - Done]
Grafana for monitoring - Done
Postgres cluster for load balancing - Done
Metabase setup - Done
Presto - Done
Presto with Hive - Done
Data collection for Jan 3rd - Done
Hive-Presto for production deployment - Done

Action Items
Migration of MLflow from AWS to the GPU server -- In progress

| Milestone | Date | Status |
| --- | --- | --- |
| Dev Sandbox with automation for managing | 13 OCT 2023 | Done |
| JupyterHub with automation for managing | 20 OCT 2023 | Done |
| GitOps integration for GPU training & builds | 27 OCT 2023 | Done |
| Grafana & Monitoring | | In Review |

ML Ops
DVC
CML (CI pipeline)
DVC Studio
MLflow integration
Streamlit for ML web app integration
- [ ] Add docs on the ML Ops process

---

Experiments

Traffic Analysis
Security - Anomaly Detection
Unique Client Identifier (UCID)
Telecom provider identification

Action Items
1. Data collection on various personal devices (pcap) @Keerthana Sundarbabu @Surajram Srinivasan
2. EDA @Surajram Srinivasan @Keerthana Sundarbabu
   1. Find traffic patterns helpful for uniquely identifying clients.
   2. Identify features that can be used to create an ML model.
   3. Build the ML model & UCID logic.

Potential Ideas
Traffic analysis at the edge - e.g. device ID fingerprinting
https://arxiv.org/pdf/2211.01963.pdf
https://iopscience.iop.org/article/10.1088/1742-6596/1963/1/012046/meta
https://www.scitepress.org/Papers/2022/112605/112605.pdf
https://scholarworks.unr.edu/bitstream/handle/11714/6021/Aksoy_unr_0139D_12965.pdf?sequence=1&isAllowed=y
https://www.sciencedirect.com/science/article/pii/S2665963822001300
https://www.researchgate.net/publication/348756665_Machine_Learning_for_the_Detection_and_Identification_of_Internet_of_Things_IoT_Devices_A_Survey
https://mayanknauni.com/?p=4392
https://www.mdpi.com/2078-2489/14/6/320

IoT Cloud Edge Capability
TensorFlow-based edge ML execution
Initial demo done on edge ML execution
On Hold

LLM (Hold)
Experiment with open-source LLMs

AI Commit - On Prem (Hold)
A CPU-based LLaMA model which can generate a summary of a Git commit based on the git diff, usable by embedUR developers on-prem without compromising any sensitive data. This is a generative AI use case and helps us understand what it takes to run LLMs at scale.

Body Count Estimation from RSSI

| Milestone | Date | Progress |
| --- | --- | --- |
| Research & Plan (as Doc in Wiki) | | Done |
| Experiment Runs | | In Progress |

References
1. Synaptics

Priority:
1. Tiny ML
2. Traffic Analysis - Security, Client Identification
3. Airtel Call-In EDA

Next Plan:
Customization in Superset.
Attendance system POC.
Coral AI explore.