Miami AI Club’s Proposal to NIST: A Retrospective

On October 30, 2023, the President of the United States issued a landmark Executive Order (Executive Order 14110) to ensure that America leads in seizing the promise and managing the risks of Artificial Intelligence (AI). The Executive Order directed the National Institute of Standards and Technology (NIST) to undertake an initiative for evaluating and auditing capabilities related to AI technologies.

The Miami AI Club (MAIC), comprising some of the leading international minds on AI, embarked on a mission to create a positive impact through AI. In response to the NIST AI Executive Order, the MAIC global experts collectively shared with the NIST leadership their strategies and guidelines to mitigate the risk of AI-Generated Synthetic Content (AI-GSC) and to enable the deployment of safe, secure, and trustworthy systems.

Synthetic content in artificial intelligence refers to content, such as text, images, or audio, generated by AI systems rather than created by humans. It can be produced through various AI techniques, including natural language processing (NLP), computer vision, and generative models such as Large Language Models (LLMs).

The potential uses of AI-generated synthetic content – images, videos, audio, or text – carried profound ethical, legal, and security implications. Ethically, there was a risk of eroding society’s trust and the integrity of information as the creation and dissemination of inaccurate content increased. Legally, these creations raised concerns about intellectual property rights, consent, and liability, especially when synthetic content was used without authorization.

Regarding security, synthetic content could be weaponized for misinformation campaigns, potentially undermining national security, influencing political processes, and manipulating financial markets. Reducing the risks associated with synthetic content in artificial intelligence could have several positive impacts, including enhancing trust and credibility in AI-generated information by minimizing misinformation, deepfakes, and biased content.

The MAIC proposed a suite of effective and practical approaches to mitigate the outlined risks associated with AI-GSC. Their strategies and guidelines were centered on enhancing content integrity through a comprehensive focus on labeling, detecting, and testing & auditing practices. These areas collectively formed the cornerstone of their approach to ensure the responsible management and verification of digital content.

While all three areas were important, the MAIC experts weighted the third area, testing and auditing, more heavily than the others, believing it to be the most feasible to implement. The proposal was delivered to NIST, marking a significant milestone in the journey toward the responsible and secure use of AI.

Here is the full proposal:

Miami AI Club Global Experts Team:

Nima Schei, MD [Linkedin] is an AI entrepreneur and the founder of Hummingbirds AI and BEL Research. He is the creator of BEL, the first machines that make decisions based on emotions. He founded the Miami AI Club and led this project.

Libia F. Scheller, PhD, MBA [Linkedin] is the Global Head of Oncology Strategic Alliances at Bayer. She holds seven CRADAs with the NIH and is a board member, advisor, and investor in companies utilizing AI in healthcare.

Felicita Sandoval, MSC, CFE [Linkedin] is a cybersecurity professional specializing in Governance, Risk, and Compliance (GRC), with a focus on AI risks associated with data privacy, underscoring her commitment to developing secure and ethical AI systems. 

Erika Twani, MBA [Linkedin] is a Miami-based best-selling author, Oracle and Microsoft veteran, and software engineer specializing in the use of AI in education.

Brian Fricke, CISSP, CISM [Linkedin] is the CISO of the City National Bank of Florida in Miami. He has been establishing innovative information security programs for over 15 years in military, government, and financial institutions.

Cyrus Hodes, MPA [Linkedin] leads the SAFE (Safety of Generative AI) project at the Global Partnership on AI. He co-founded Stability AI and Infinitio.AI (the first blockchain-based generative AI model).

William Mendez, MSC [Linkedin] is the former CISO of the City of Miami and an experienced vCISO and AI-driven cybersecurity expert, pioneering adaptable, AI-integrated strategies for robust digital defense.

Noel J. Guillama-Alvarez [Linkedin] is a nationally recognized expert on health information technology. A lifelong entrepreneur with 35 years of experience, he has founded and taken six companies public and holds over two dozen patents in healthcare IT, ML/AI/AR, and blockchain.

Michael Mylrea, PhD [Linkedin] is a cybersecurity leader and technologist with a 15-year track record leading cybersecurity, governance, risk, and compliance (GRC), and applied AI/ML innovation. He is a Distinguished Fellow at the University of Miami Institute for Data Science & Computing.

Dan Barsky, JD [Linkedin] is a partner at Holland & Knight LLP. He is Co-Director of the Startup Clinic and an Adjunct Professor at the University of Miami School of Law.

Paul Plofchan, CIPP/US [Linkedin] is a managing principal at Grimberg, Johnson & McQue, where he helps organizations drive commercial success, manage risk, and shape the external environment. He is the former Chief Privacy Officer at ADT and former Director of Government Affairs at Pfizer.

Ivan Dynamo De Jesus [Linkedin] is a finance and healthcare professional, founder of AXEN Health Inc. & Cynari Inc., and Partner at Impact Invest Corp.

Mandeep Maini, M. ED, MBA [Linkedin] has a degree in AI from Harvard University and years of experience in healthcare technology. She now helps healthcare organizations prepare for and adapt to AI.

Pedro A. Santos, MS [Linkedin] is the executive director of emerging technologies at Miami Dade College. An educator and technology visionary in academia, he leverages AI for student growth and is skilled in leading technology integration, with a focus on AI-enhanced learning and development.

Full proposal: