This project instruments complex software production environments to continuously monitor the DevSecOps pipeline and uses that data to keep estimates of cost, schedule, and quality up to date.
Additional Sites Directory
The following web properties are part of the SEI's work. Some are restricted to approved users, but most are open wikis that offer technical details about ongoing work.
SEI work such as technical reports, logos, articles, presentations, and methods is the intellectual property of Carnegie Mellon University. To request permission to use Carnegie Mellon copyrighted ...
The ML system should neither do the wrong thing when presented with adversarial input nor reveal sensitive information about the training data during its operation.
State-of-the-art ML models can produce inaccurate inferences in scenarios where humans would reasonably expect high accuracy.
SAFIR will improve architecture-led safety assessment processes by delivering new tool-supported analysis and code generation capabilities to designers.
Our tool can identify which decompiled functions are likely to be semantically equivalent to the original binary function and which are unlikely to be equivalent.
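The page does not describe how the tool makes this judgment, but a common baseline for flagging likely equivalence is differential testing: run both versions on the same inputs and compare outputs. Below is a minimal sketch of that idea; the function names, the random 32-bit integer input domain, and the trial count are illustrative assumptions, not the SEI tool's actual method.

```python
import random

def likely_equivalent(f, g, trials=1000, seed=0):
    """Heuristically test whether two functions agree on random inputs.

    Agreement on many random inputs suggests (but does not prove)
    semantic equivalence; a single disagreement disproves it.
    """
    rng = random.Random(seed)
    for _ in range(trials):
        x = rng.randint(-2**31, 2**31 - 1)
        if f(x) != g(x):
            return False  # counterexample found: definitely not equivalent
    return True  # no counterexample found: likely equivalent

# Hypothetical example: compare a reference function with a
# "decompiled" reimplementation of absolute value.
original = abs
decompiled = lambda x: x if x > 0 else -x

print(likely_equivalent(original, decompiled))  # True
```

Note that this kind of testing can only disprove equivalence with certainty; a passing result remains probabilistic, which is why the tool's output is phrased in terms of functions that are "likely" or "unlikely" to be equivalent.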
The potential of quantum computing, especially in the near term, will not be realized without close integration with state-of-the-art classical computing. Universal gate (UG) quantum computers share ...
The SEI is taking the initiative to develop an AI engineering discipline that will lay the groundwork for establishing the practices, processes, and knowledge to build new generations of AI solutions.
Dio De Niz, Technical Director, Assuring Cyber-Physical Systems. Read publications authored by Dio De Niz.
An automated conformance checker that can be integrated into the continuous integration workflow… This technology will correctly identify design nonconformances with precision greater than 90%.
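For context on the 90% figure: precision here means the share of reported nonconformances that are genuine, as opposed to false alarms. A minimal illustration of the metric follows; the counts are hypothetical, not measured results from the checker.

```python
def precision(true_positives: int, false_positives: int) -> float:
    """Precision: fraction of flagged nonconformances that are real."""
    return true_positives / (true_positives + false_positives)

# Hypothetical run: 95 genuine nonconformances flagged, 5 false alarms.
print(precision(95, 5))  # 0.95 -> exceeds the 90% target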
Attacks on machine learning (ML) systems can make them learn the wrong thing, do the wrong thing, or reveal sensitive information. Train, But Verify protects ML systems by training them to act against ...
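The snippet names adversarial training without giving details. One standard way to generate the adversarial inputs such training defends against is the Fast Gradient Sign Method (FGSM), which perturbs an input in the direction that most increases the model's loss. The sketch below applies FGSM to a toy linear model; it is a generic illustration of the attack, not the Train, But Verify implementation, and all variable values are made up.

```python
import numpy as np

def fgsm_perturb(x, grad, epsilon=0.1):
    """Fast Gradient Sign Method: nudge the input in the direction
    that most increases the loss, bounded by epsilon per feature."""
    return x + epsilon * np.sign(grad)

# Toy linear classifier: score = w . x, so the gradient of the
# score with respect to the input x is simply w.
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.2, 0.1, -0.3])

x_adv = fgsm_perturb(x, grad=w, epsilon=0.1)
print("clean score:", w @ x)            # score on the original input
print("adversarial score:", w @ x_adv)  # score pushed up by the attack
```

Adversarial training, in this framing, means generating such perturbed inputs during training and teaching the model to classify them correctly, so that small worst-case input changes no longer flip its decisions.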