Gagandeep Singh
Staff Data Scientist @ Walmart
🚀 Exciting News for Data Practitioners! 🚀

Are you tired of the hassle and expense of annotating production data to monitor your machine learning models? Allow me to introduce NannyML, an open-source post-deployment model monitoring framework in Python that's about to make your life a whole lot easier. NannyML comes packed with some seriously clever features that are game-changers for those of us who work with data. The best part? It doesn't rely on labeled data. Yes, you heard that right: all the magic happens with the features you're already capturing while your model is in production.

Let's dive into the essence of NannyML:

🔍 Feature Drift Detection: NannyML identifies univariate drift by comparing feature distributions across different chunks of data. Think of these chunks as snapshots of your data; they can be defined by time period, chunk size, or chunk count.

📊 Advanced Analysis: For more complex scenarios involving multivariate drift, NannyML tracks the PCA data reconstruction error across chunks. In simple terms, it learns the principal components of your reference data and flags a deviation when new data is no longer reconstructed well by those components.

But wait, there's more:

📈 Model Performance Estimation: NannyML doesn't stop at drift detection; it also estimates your model's performance, helping you maintain top-notch results.

💰 Business Value Estimation: It even assists you in assessing the business value of your models, ensuring they continue to deliver the desired outcomes.

🔍 Data Quality Monitoring: Keeping an eye on data quality? NannyML has your back, ensuring your data remains reliable and consistent.

While NannyML offers a plethora of functionality, let's focus on feature drift detection, which doesn't require labeled data. Unfortunately, it won't catch concept drift, but that's a small trade-off for the convenience it provides.

📊 Univariate Drift: Check whether individual feature distributions have shifted between the reference and analysis periods. This can be a game-changer for maintaining model accuracy.

📉 Output Drift: You can also track shifts in the distribution of predicted classes over time, ensuring your model's predictions remain on point.

📊 Multivariate Drift: NannyML goes a step further by looking at the overall shift in the joint feature distribution, validating the univariate results.

In conclusion, NannyML is a powerful tool that simplifies drift detection for production models. With its intuitive interface and open-source nature, there's no excuse not to use it. Don't let your production models lose their business value through neglect. Embrace NannyML and keep your models on the right track! 🚀
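For readers who want to try the univariate check described above, here is a minimal sketch assuming NannyML's UnivariateDriftCalculator API; the file paths, feature names, and chunking choices are illustrative, not from the post:

```python
# Minimal univariate drift sketch. Paths and feature names are hypothetical;
# chunks can be defined by size, count, or time period.
import pandas as pd
import nannyml as nml

reference_df = pd.read_parquet("reference.parquet")  # data the model was validated on
analysis_df = pd.read_parquet("analysis.parquet")    # recent production data, no labels needed

calc = nml.UnivariateDriftCalculator(
    column_names=["feature_a", "feature_b"],  # hypothetical features
    timestamp_column_name="timestamp",
    chunk_size=5000,                          # alternatively chunk_number=10 or chunk_period="M"
    continuous_methods=["jensen_shannon"],
    categorical_methods=["chi2"],
)
calc.fit(reference_df)                 # learn per-feature reference distributions
results = calc.calculate(analysis_df)  # compare each chunk against the reference
print(results.filter(period="analysis").to_df())
```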
Oct 29, 2023
Marios Kadriu
Data Scientist | Software Engineer | DeepMind Scholar
91% is a striking figure. It goes to show how crucial it is to keep an eye on model performance in production. Moreover, factors beyond data drift may also be causing the degradation. I am very happy to learn that NannyML, an open-source Python library, exists to help us get a performance monitoring project started. Have you used NannyML or a different model performance monitoring tool? #datascience #ai #machinelearning
Jul 27, 2023
Anderson Chaves
Lead Data Scientist at EssilorLuxottica
Another cool thing for the day! 💡 Thanks, Wojtek Kuberski and NannyML! #python #machinelearning #datascience
Nov 10, 2022
Pascal Biese
AI/ML Engineer | LLMs | NLP
Does the pattern below look familiar to you? If not, consider yourself incredibly lucky! For everyone else, check out NannyML: NannyML is an open-source Python library that allows you to estimate post-deployment model performance (without access to targets), detect data drift, and intelligently link data drift alerts back to changes in model performance. Built for data scientists, NannyML has an easy-to-use interface and interactive visualizations, is completely model-agnostic, and currently supports all tabular binary classification use cases. (Source: lnkd.in/e2RjQsQg) #DataScience #MachineLearning #AI #DeepLearning
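As a rough illustration of the "link drift alerts back to performance" idea, here is a hedged sketch assuming the AlertCountRanker available in recent NannyML releases (the ranking API has changed across versions); all paths and column names are hypothetical:

```python
# Hedged sketch: rank features by how often their drift checks raised alerts.
import pandas as pd
import nannyml as nml

reference_df = pd.read_parquet("reference.parquet")  # hypothetical paths
analysis_df = pd.read_parquet("analysis.parquet")

calc = nml.UnivariateDriftCalculator(
    column_names=["feature_a", "feature_b"],  # hypothetical features
    timestamp_column_name="timestamp",
    chunk_size=5000,
)
calc.fit(reference_df)
univariate_results = calc.calculate(analysis_df)

# Features with the most drift alerts are the first suspects when
# estimated performance drops.
ranker = nml.AlertCountRanker()
ranked = ranker.rank(univariate_results.filter(period="analysis"), only_drifting=False)
print(ranked)
```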
May 16, 2022
Kishan Savant
Software Engineer @VCollab | Open Source Enthusiast
Thank you Hakim Elakhrass, Niels Nuyttens and the NannyML team for sending this #contributions #swag all the way from Belgium. Really appreciate the note, Hakim. Looking forward to more #opensource #contributions to the wonderful NannyML #modelmonitoring tool.
Apr 8, 2023
🚀 Mikkel Jensen
Data Scientist | Developer | ML & AI
Great resources and libraries I have come across recently:

NannyML: An open-source tool for monitoring models post-deployment. I find the ability to estimate performance without labels especially helpful, as we have a one-year lag on our labels. It can also detect data and classifier drift.

OptBinning: The go-to data binning library. Super useful for discovering interesting intervals in variables, and for binning and preprocessing before applying a logistic regression on top (a sketch follows below).

TabDDPM: A new method for modelling tabular data. While I have only briefly browsed the paper, it seems promising for generating tabular data, outperforming both SMOTE and GANs in most of the tested cases! The paper was published only a week ago, but I'm probably already late to the party talking about this one.

What are your favorite new tools? Links in the comments 👇

------------------------------------------------------------------
Talking data science, credit risk, cryptocurrency, software development - Connect & start the conversation!

#machinelearning #python #datascience #opensource
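As a rough illustration of the OptBinning flow mentioned above, here is a hedged sketch on synthetic data, assuming optbinning's OptimalBinning API; the variable, target, and parameters are illustrative:

```python
# Illustrative OptBinning + logistic regression flow on synthetic data.
import numpy as np
from optbinning import OptimalBinning
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
x = rng.normal(size=1_000)                                   # one continuous variable
y = (x + rng.normal(scale=2.0, size=1_000) > 0).astype(int)  # noisy binary target

optb = OptimalBinning(name="x", dtype="numerical", solver="cp")
optb.fit(x, y)                           # find bin edges that are optimal w.r.t. y
x_woe = optb.transform(x, metric="woe")  # weight-of-evidence encoding per bin

clf = LogisticRegression().fit(x_woe.reshape(-1, 1), y)
print(optb.splits, clf.coef_)            # learned split points and model weight
```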
Oct 6, 2022
Olivier Binette
Data Science Research, ML Evaluation & Entity Resolution // PhD Candidate at Duke
🙅 Stop monitoring data drift. Here's what to do instead.

After deploying machine learning models, it's essential to ensure that they keep functioning as intended. One common way to do this is by monitoring data drift: verifying that the data used for predictions is similar to the data the model was trained on. If there is a significant difference, retraining the model on updated data can help.

However, monitoring data drift does not tell you whether the drift is affecting the model's ability to solve the task it was trained for. An alternative and more focused approach is to continuously monitor the model's generalization performance. This can be done without any labeled data through clever statistical techniques, such as confidence-based performance estimation and direct loss estimation. 🚀 NannyML implements these methods, allowing you to estimate the model's performance on drifted data and focus on what matters most for your task.

Does this mean you should really stop monitoring data drift? No. But keep in mind that data drift does not always equate to performance drift, and it's usually more beneficial to focus on the latter.

Code below is from lnkd.in/e95kJv-H

#machinelearning #ml #ai #drift #performance #evaluation #statistics #datascience #nannyml #datadrift #MLOps
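The code the author refers to is behind the link above; as an independent stand-in, here is a hedged sketch of confidence-based performance estimation with NannyML's CBPE, assuming its bundled synthetic car-loan dataset and a recent API version:

```python
# Hedged CBPE sketch on NannyML's synthetic car-loan dataset; assumes the
# classifier's scores are well calibrated.
import nannyml as nml

reference_df, analysis_df, _ = nml.load_synthetic_car_loan_dataset()

estimator = nml.CBPE(
    y_pred_proba="y_pred_proba",           # predicted probability column
    y_pred="y_pred",                       # predicted label column
    y_true="repaid",                       # labels exist only in the reference period
    timestamp_column_name="timestamp",
    problem_type="classification_binary",
    metrics=["roc_auc", "f1"],
    chunk_size=5000,
)
estimator.fit(reference_df)                  # calibrate on labeled reference data
estimated = estimator.estimate(analysis_df)  # no targets required here
estimated.plot().show()                      # estimated metric per chunk, with alert bands
```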
Feb 2, 2023
Roger Kamena, M.Sc.
Senior Data Scientist, AI-Powered Analytics at UKG
Even though sheer experience taught me that data drift alone is not enough to monitor models in production, and that regularly monitoring performance through randomized control trials on out-of-sample data is a needed practice, it's nice to read a paper that confirms it with empirical evidence. Moreover, NannyML is a nice discovery for me. Can't wait to test it!
Feb 8, 2023
Louis Owen
AI & Data Science | Yellow.ai
[Estimating Accuracy Without Ground Truth]

Getting your ML model to production is not an easy task. Monitoring your deployed ML model is even harder.

If you have a business metric that directly correlates with your ML model's performance, good for you. But what if multiple factors influence your business metric and your ML model is just one of them? What if you want to know exactly how your ML model is performing in production, measured by technical ML metrics (accuracy, precision, recall, etc.)?

✨ Introducing CBPE (Confidence-Based Performance Estimation), developed by the amazing team behind NannyML. With CBPE you can estimate the performance of any ML model without any ground truth! How is that even possible? It works by relying on the confidence scores of the prediction output, under the assumptions that the model is well calibrated and that there is no concept drift.

Furthermore, we can also estimate the performance of a regression model with a similar algorithm developed by the NannyML team, called DLE (Direct Loss Estimation)!

Curious to learn more? You can refer to the following articles for more information!
📌 lnkd.in/gJuPBQcb
📌 https://lnkd.in/gsbG3HgE
📌 https://lnkd.in/gh3A6iUJ

#artificialintelligence #machinelearning #datascience #sharingiscaring
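Here is a hedged sketch of DLE for a regression model, assuming NannyML's recent API; the data paths and column names are illustrative:

```python
# Hedged DLE sketch for regression; paths and columns are hypothetical.
import pandas as pd
import nannyml as nml

reference_df = pd.read_parquet("reference.parquet")  # labeled reference period
analysis_df = pd.read_parquet("analysis.parquet")    # production period, no y_true needed

estimator = nml.DLE(
    feature_column_names=["feature_a", "feature_b"],  # hypothetical features
    y_pred="y_pred",
    y_true="y_true",
    timestamp_column_name="timestamp",
    metrics=["mae", "rmse"],
    chunk_size=5000,
)
estimator.fit(reference_df)   # fits an internal model that predicts the loss
results = estimator.estimate(analysis_df)
results.plot().show()         # estimated MAE/RMSE per chunk
```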
Feb 7, 2023
Smriti Mishra
Data Science & Engineering at Adage
How can you know whether your ML models have failed silently after deployment?

NannyML was trending on GitHub (it was also #3 on Product Hunt). It's a fantastic open-source Python library for detecting silent ML model failure! Key aspects include:

🔹 Estimate the performance of a deployed ML model in the absence of target data, and detect multivariate and univariate data drift robustly (a sketch of the multivariate check follows below)
🔹 Link drops in performance to drift in specific features
🔹 Compatible with all classification models
🔹 Built-in performance and data drift visualisations

pip install nannyml

Check out the open-source project and give it a star to keep up with future updates, like the forthcoming regression support! lnkd.in/ej_VtyeP

#technology #artificialintelligence #python #data #programming #machinelearning
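For the multivariate check, here is a hedged sketch assuming NannyML's DataReconstructionDriftCalculator (PCA reconstruction error); paths and column names are illustrative:

```python
# Hedged multivariate drift sketch via PCA reconstruction error.
import pandas as pd
import nannyml as nml

reference_df = pd.read_parquet("reference.parquet")  # hypothetical paths
analysis_df = pd.read_parquet("analysis.parquet")

calc = nml.DataReconstructionDriftCalculator(
    column_names=["feature_a", "feature_b", "feature_c"],  # hypothetical features
    timestamp_column_name="timestamp",
    chunk_size=5000,
)
calc.fit(reference_df)                 # fit PCA, record baseline reconstruction error
results = calc.calculate(analysis_df)  # error above threshold flags multivariate drift
results.plot().show()
```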
Jun 20, 2022
👋 Simon Stiebellehner
Building ML Platforms | Engineering Manager | 👨‍🏫 University Lecturer | Advisor
Deployed a #MachineLearning model to production? Now you're having sleepless nights due to imminent performance degradation? 😰

Get a Nanny for your model! NannyML (by NannyML) is an Open Source Python package that helps you estimate model performance in production without access to targets!

pip install nannyml

➡️ Detect data drift of deployed models w/o targets(!)
➡️ Configure alerts (a sketch follows below)
➡️ Link data drift back to model changes
➡️ Model-agnostic
➡️ Comes with a neat set of visualizations

Check it out and star the repo to stay up-to-date! ⭐
GitHub: lnkd.in/e-_YSDsh

---

Follow me for curated, high-quality content on productionizing #MachineLearning and #MLOps. Let's take #DataScience from Notebook to Production!
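The "configure alerts" bullet can be illustrated with custom thresholds; this is a hedged sketch assuming the nannyml.thresholds module available in recent releases, with illustrative paths, features, and cut-offs:

```python
# Hedged sketch: custom alert threshold on a drift method (recent releases).
import pandas as pd
import nannyml as nml
from nannyml.thresholds import ConstantThreshold

reference_df = pd.read_parquet("reference.parquet")  # hypothetical paths
analysis_df = pd.read_parquet("analysis.parquet")

calc = nml.UnivariateDriftCalculator(
    column_names=["feature_a"],                       # hypothetical feature
    timestamp_column_name="timestamp",
    chunk_size=5000,
    continuous_methods=["jensen_shannon"],
    thresholds={"jensen_shannon": ConstantThreshold(upper=0.2)},  # alert above 0.2
)
calc.fit(reference_df)
results = calc.calculate(analysis_df)
results.plot(kind="drift").show()  # chunks breaching the threshold are flagged
```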
May 14, 2022
João Maia
Data Scientist | Machine Learning | Python | Keras
Thanks Wojtek, I recently implemented your solution in my personal project, and it's really useful for identifying bad features, even after running other variable selection methods, and it helps me prevent some failures.
Jul 27, 2023
NLP Logix
3,527 followers
Great read alert: this article from NannyML discusses a recent study by MIT, Harvard, the University of Monterrey, and other top institutions on models degrading over time. lnkd.in/g49zhe5a #ml #ai #mlmodels #datascienceisateamsport
Apr 18, 2023