Tech Transforms
By Carolyn Ford
The podcast currently has 85 episodes available.
Jason Miller is the Executive Editor of Federal News Network and has covered the federal technology space over the course of five Presidential administrations. He brings his wealth of knowledge as he joins Tech Transforms to talk about AI, the top things government agencies are working towards this year and his predictions around FedRAMP changes. Jason also pulls on his decades of experience as he discusses events that changed the nation's approach to cybersecurity and the longstanding need to have data that is better, faster and easier to use.
Key Topics
Jason expressed a clear conviction that technology issues are largely immune to political fluctuation and remain a constant on government agendas. Reflecting on his experience across five administrations, he noted that the foundational technological discussions, such as cloud adoption, cybersecurity enhancement and overall IT improvement, are fundamentally preserved through transitions in political leadership. He highlighted that the drive to enhance government IT is typically powered by the resilience and dedication of public servants, who generally carry on valuable reforms and initiatives regardless of the sitting administration's politics. These individuals are essential to sustaining progress and ensuring that technology remains a key priority for effective governance.
Federal IT Policies Consistency: "No one comes in and says, I'm against AI, or cloud is bad, move back on premise, or cybersecurity, defund cybersecurity. I think those are the issues that stay the same." — Jason Miller
Executive Orders and AI Adoption
Addressing the specifics of executive orders, particularly those influencing the implementation and development of artificial intelligence (AI), Jason examined their historical persistence and their potential to shape operational practices in the government sector. He and Mark discussed how the stability of AI-related orders through various administrations is indicative of a broader governmental consensus on the integral role AI holds in modernizing federal operations. Despite changes in leadership, incoming officials frequently uphold the momentum established by their predecessors when it comes to leveraging AI, indicating a shared, bipartisan recognition of its strategic importance to the government's future capabilities and efficiencies.
Cybersecurity Evolution: Zero Trust Principles and Network Security Challenges in Federal Agencies
Zero Trust and Cybersecurity Budgeting
During the podcast, Carolyn and Jason delve into the current trends and expectations for federal cybersecurity advancements, with a particular focus on zero trust architecture. Their discussion acknowledges that agencies are on a tight schedule to meet the guidelines set forth by the Office of Management and Budget, which has set 2025 as the target year for civilian agencies to embrace specific zero trust requirements, while the Department of Defense has until 2027.
Moving past the traditional perimeter defense model, zero trust principles necessitate an ongoing and multifaceted approach to security, which carries sizable budget implications. Jason underscored the importance of the 2024 fiscal year, noting it as the first time federal budgets are being crafted with clear delineations for zero trust capabilities. This shift in focus is exemplified by the rollout of endpoint detection and response (EDR) technologies, vital components in this architecture that ensure rigorous monitoring and real-time responsiveness to cyber threats.
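To make the EDR layer concrete, here is a minimal, hypothetical sketch of the kind of rule-based check an endpoint agent applies to each process event it observes. The process names, rules and thresholds are illustrative assumptions, not any vendor's or agency's actual detection logic.

```python
# Hypothetical sketch of an EDR-style rule check on process-creation events.
# Rules and process names are illustrative only.
from dataclasses import dataclass

# Parent/child process pairs that rarely occur in benign activity (illustrative).
SUSPICIOUS_PAIRS = {("winword.exe", "powershell.exe"), ("outlook.exe", "cmd.exe")}

@dataclass
class ProcessEvent:
    parent: str
    child: str
    command_line: str

def evaluate(event: ProcessEvent) -> str:
    """Return a verdict for a single endpoint event."""
    if (event.parent.lower(), event.child.lower()) in SUSPICIOUS_PAIRS:
        return "alert: unusual parent/child process pair"
    if "-encodedcommand" in event.command_line.lower():
        return "alert: encoded PowerShell command line"
    return "allow"

if __name__ == "__main__":
    event = ProcessEvent("winword.exe", "powershell.exe", "powershell -EncodedCommand ZQBjAGgAbwA=")
    print(evaluate(event))  # alert: unusual parent/child process pair
```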
Understanding the Cybersecurity Evolution
Jason underscored the complexities of network security as federal entities confront the expanding cybersecurity landscape. Highlighted was the layered approach needed to fortify cybersecurity, starting with identity and access management (IAM). This segment illuminated the government's drive to update antiquated systems with modern identification and credentialing processes to better regulate access control. The discussion extended to a critical analysis of data layer security, emphasizing the necessity for agencies to marshal their applications and data against unauthorized access. Furthermore, Jason hinted at the broader horizon of security measures, which now includes operational technology (OT) and Internet of Things (IoT) devices. The intertwining of these technologies with standard IT infrastructure adds layers of complexity to security protocols. The conversation shined a light on the massive task that lies ahead as agencies work to comprehend and safeguard the expanded network perimeter and develop strategies to bring a variety of devices under a comprehensive cybersecurity umbrella.
The Evolution of AI in Cybersecurity: "We can take data that was 3 years ago or data over the last 3 years and look for trends that we can then use for our future. I think what they're looking for now is more real time, more immediate, especially if you think about, like, cybersecurity." — Jason Miller
Innovations and Challenges in Tech Reporting
Timeliness in Problem Reporting
Jason believes that being proactive is vital when it comes to identifying and addressing potential issues within federal agencies. He highlighted that by the time an oversight report, such as those from the Government Accountability Office or an Inspector General's office, is made public, the concerned agency has likely been aware of the issue and has already taken steps to address it. This underlines the criticality of immediate agency reactions to problems. In the context of these reports, Jason suggested reading the agency's responses first. They provide the most current view of what's happening and the actions taken, often making them more newsworthy than the findings of the report itself.
ACT-IAC and AFCEA Gatherings Key to Cybersecurity Evolution Dialogue
Without specifically endorsing any one event, Jason acknowledged the importance of various industry gatherings where government and industry leaders convene to discuss pressing topics. He emphasized the ACT-IAC and the AFCEA events as beneficial arenas that enable him to engage deeply in conversations that can lead to actionable insights and meaningful connections. He also mentioned that these events provide an opportunity to interact with federal agency leaders outside the formal constraints of an office setting. This can lead to more open and candid exchanges of ideas and experiences within the government tech community. The ACT-IAC conferences and AFCEA's branch-specific IT days, according to Jason, yield particularly high-value discussions that contribute to both immediate news items and broader thematic reporting.
Probing the Cybersecurity Evolution
Jason's Insight on Federal Tech Trends
Jason brings a wealth of knowledge specific to federal government technology trends. He highlights AI as a prevalent topic within current discussions. His emphasis on AI signifies the shift from its former buzzword status to a fundamental tool in federal IT arsenals, especially regarding applications in cybersecurity and immediate data analysis. Jason notes that this mirrors the pattern of past tech trends in the industry, where initial hype evolves into concrete implementations. The conversation underscores the fact that while AI is gaining traction in strategic planning and operations, it is critical to discern genuine AI adoption from mere marketing.
AI Shift Reflects Cybersecurity Evolution and Predictive Technology Integration in Government Operations
As the conversation progresses, Jason, Carolyn and Mark explore how the vigorous enthusiasm around AI aligns with patterns observed during the advent of previous technologies. The cycle of tech trends typically begins with a surge of excitement and culminates with the practical integration of technology within government operations. Jason points out that although AI is the topic du jour, the government's drive towards embracing real-time and predictive capabilities of AI is indicative of its elevated role compared to earlier technology hypes. This shift spotlights AI's increasing value in enhancing operational efficiency and decision-making processes across various federal agencies.
Appreciating Government Employees: "There's so many great people who work for the government who want to do the right thing or trying to do the right thing, that work hard every day, that don't just show up at 9 and leave at 5 and take a 2 hour lunch." — Jason Miller
The FedRAMP Overhaul Debate
Rethinking FedRAMP
FedRAMP's reform was a critical topic addressed by Jason, who noted industry-wide eagerness for revising the program's long-standing framework. Not only has the cost of compliance become a pressing issue for businesses aiming to secure their cloud solutions, but the time-consuming journey through the certification labyrinth has compounded their challenges. Advancements in technology and a shift towards better automation capabilities have supported the argument for modernizing FedRAMP. The white paper presented by the General Services Administration responded to such pressures with the goal of making the process more efficient. Jason also mentioned a legislative angle with Representative Connolly's involvement, a sign that Congress is attuned to the private sector's concerns about the program's current state.
Predicting the Future of FedRAMP
Moving forward, while discussing federal efforts to enhance cloud security protocols, Jason described the nuances in predicting FedRAMP's evolution. He cited the Department of Defense's actions as a positive development, in which it suggested frameworks for accepting FedRAMP certifications reciprocally, depending on security levels. This reciprocity aims to foster mutual trust and reduce redundancy in security validations. However, Jason exercised caution in providing a timeline by which tangible reforms might materialize for businesses pursuing FedRAMP accreditations. Despite the uncertainties, he recognized automation, specifically via the Open Security Controls Assessment Language (OSCAL), as a potential accelerant for the much-needed reform, bringing about quicker, more cost-effective compliance processes.
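As a rough illustration of why OSCAL's machine-readable format is viewed as an accelerant, the sketch below checks an OSCAL system security plan (SSP) exported as JSON against a small control list. The field names follow the general OSCAL SSP layout, but the file path and the control list are hypothetical placeholders, not an actual FedRAMP baseline.

```python
# Hypothetical sketch: compare an OSCAL SSP (JSON) against a required-control list.
# The control IDs below are an illustrative subset, not a real FedRAMP baseline.
import json

REQUIRED_CONTROLS = {"ac-2", "ac-17", "ia-2", "sc-7"}

def implemented_controls(ssp_path: str) -> set[str]:
    """Collect the control IDs the SSP claims to implement."""
    with open(ssp_path) as f:
        ssp = json.load(f)
    reqs = ssp["system-security-plan"]["control-implementation"]["implemented-requirements"]
    return {req["control-id"] for req in reqs}

def missing_controls(ssp_path: str) -> set[str]:
    """Return required controls the SSP does not yet cover."""
    return REQUIRED_CONTROLS - implemented_controls(ssp_path)

if __name__ == "__main__":
    # "ssp.json" is a placeholder path for an OSCAL SSP export.
    print("Missing controls:", missing_controls("ssp.json"))
```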
Tracking the Cybersecurity Evolution: From 2006 Data Breach to Contemporary Data Protection Strategies
Analyzing the Cybersecurity Evolution Post-2006 Veterans Affairs Data Mishandling
Jason provided context on the evolution of cybersecurity, drawing from an incident in 2006 when the Veterans Affairs department mishandled tapes containing sensitive data of millions of veterans. This episode, he explained, was an eye-opener, underscoring the importance of data security within the federal government. The aftermath was a pivot towards greater openness about cybersecurity issues, moving away from a more secretive posture to one where sharing of information became essential for strengthening overall security. What we observe now is a more concerted effort within government circles to collaborate, engage with industry partners and cultivate a proactive stance on cybersecurity threats, with agencies actively communicating about and learning from security incidents.
Emphasizing Data Protection
The conversation highlighted the criticality of data protection as it has become the nucleus of many governmental operations and decision-making processes. Since the intrusion into the Office of Personnel Management's records, there has been a palpable shift toward more robust data safeguards. Jason pointed out that staying well-informed about such dynamics is crucial, entailing an immersion in activities such as attending industry events, networking with key players and thoroughly analyzing inspector general and Government Accountability Office reports. Such proactive engagement helps in staying abreast of the current and emerging landscape of federal technology, especially the methodologies and strategies deployed to protect the troves of sensitive data managed by government entities.
About Our Guest
Jason Miller has served as executive editor of Federal News Network since 2008. In this role, he directs the news coverage on all federal issues. He has also produced several news series – among them on whistleblower retaliation at the Small Business Administration, the impact of the Technology Modernization Fund and the ever-changing role of agency CIOs.
Episode Links
Can you spot a deepfake? Will AI impact the election? What can we do individually to improve election security? Hillary Coover, one of the hosts of the It's 5:05! Podcast, and Tracy Bannon join for another So What? episode of Tech Transforms to talk about all things election security. Listen in as the trio discusses cybersecurity stress tests, social engineering, combatting disinformation and much more.
Key Topics
Hillary Coover brings attention to the pivotal distinction between misinformation and disinformation. Misinformation is the spread of false information without ill intent, often stemming from misunderstandings or mistakes. On the other hand, disinformation is a more insidious tactic involving the intentional fabrication and propagation of false information, aimed at deceiving the public. Hillary emphasizes that recognizing these differences is vital in order to effectively identify and combat these issues. She also warns about the role of external national entities that try to amplify societal divisions by manipulating online conversations to serve their own geopolitical aims.
Understanding Disinformation and Misinformation: "Disinformation is is a deliberate attempt to falsify information, whereas misinformation is a little different." — Hillary Coover
The Challenges of Policing Social Media Content
The episode dives into the complexities of managing content on social media platforms, where Tracy Bannon and Hillary discuss the delicate balance required to combat harmful content without infringing on freedom of speech or accidentally suppressing valuable discourse. As part of this discussion, they mention their intention to revisit and discuss the book "The Ministry for the Future," which explores related themes, suggesting that the novel offers insights that could prove valuable in understanding the intricate challenges of regulating social media. There is a shared concern about the potential for an overly robust censorship approach to hinder the dissemination of truth as much as it limits the spread of falsehoods.
The Erosion of Face-to-Face Political Dialogue
The conversation transitions to the broader societal implications of digital dependency, specifically addressing how the diminishment of community engagement has led individuals to increasingly source news and discourse from digital platforms. This shift towards isolationist tendencies, amplified by the creation of digital echo chambers, results in a decline of in-person political discussions. As a result, there is growing apprehension about the future of political discourse and community bonds, with Hillary and Tracy reflecting on the contemporary rarity of the open, face-to-face political conversations that generations past traditionally engaged in.
The Shadow of Foreign Influence and Election Integrity
Challenges in India's Multiparty Electoral System
In the course of the discussion, the complexity of India's electoral system, with its multitude of political parties, is presented as an example that underlines the difficulty in verifying information. The expansive and diversified political landscape poses a formidable challenge in maintaining the sanctity of the electoral process. The capability of AI to produce deepfakes further amplifies the risks associated with distinguishing genuine content from fabricated misinformation. The podcast conversation indicates that voters, particularly in less urbanized areas with lower digital literacy levels, are especially vulnerable to deceptive content. This magnifies the potential for foreign entities to successfully disseminate propaganda and influence election outcomes.
Election Integrity and AI: "Misinformation and disinformation, they're not new. The spread of that is certainly not new in the context of elections. But the AI technology is exacerbating the problem, and and we as a society are not keeping up with our adversaries and social media manipulation. Phishing and social engineering attacks enhanced by AI technologies are really, really stressing stressing the system and stressing the election integrity." — Hillary Coover
Countering Foreign Disinformation Campaigns in the Digital Age
With a focus on the discreet yet potent role of foreign intervention in shaping narratives, Hillary spotlights an insidious aspect of contemporary political warfare: the exploitation of media and digital platforms to sway public perception. This influence is not limited to overt propaganda but extends to subtler forms of manipulation that seed doubt and discord among the electorate. As the podcast discussion suggests, the consequences of such foreign-backed campaigns could be significant, leading to polarization and undermining the foundational principles of democratic debate and decision-making. The potential for these campaigns to carry considerable weight in political discourse warrants vigilance and proactive measures to defend against such incursions into informational autonomy.
Addressing the Impact of Disinformation Through AI's Historical Representation Bias
Tackling Disinformation: AI Bias and the Misrepresentation of Historical Figures
The discussion on AI bias steers toward concrete instances where AI struggles, as Tracy brings forth examples that illustrate the inaccuracies that can arise when AI models generate historical figures. Tracy references a recent episode in which Google's Gemini model was taken offline after it incorrectly generated images of German soldiers from World War II that did not match historical records. Similar errors occurred when the AI produced images of America's Founding Fathers featuring individuals of racial backgrounds that did not reflect the actual historical figures. These errors are attributed not to malicious intent by data scientists but to the data corpus used in training these models. This segment underscores the significant issues that can result from AI systems when they misinterpret or fail to account for historical contexts.
The Necessity of Addressing AI Bias
Continuing the conversation, Hillary emphasizes the importance of recognizing and addressing the biases in AI. She advocates for the vital need to understand historical nuances to circumvent such AI missteps. Both Hillary and Tracy discuss how biased news and misinformation can influence public opinion and election outcomes. This brings to light the critical role historical accuracy plays in the dissemination of information. They point out that to prevent biased AI-generated data from misleading the public, a combination of historical education and conscious efforts to identify and address these biases is necessary. The recognition of potential AI bias leads to a deeper discussion about ensuring information accuracy, particularly with regard to historical facts that could sway voter perception during elections. Tracy and Hillary suggest that addressing these challenges is not just a technological issue but also an educational one, where society must be taught to critically evaluate AI-generated content.
The Challenge of Community Scale Versus Online Influence
Combating Disinformation: The Struggle to Scale Community Engagement Versus Digital Platforms' Reach
The dialogue acknowledges the difficulty of scaling community engagement in the shadow of digital platforms' expansive reach. Hillary and Tracy delve into the traditional benefits of personal interactions within local communities, which often contribute to a more nuanced and direct exchange of ideas. They compare this to the convenience and immediacy of online platforms, which, while enabling widespread dissemination of information, often lack the personal connection and accountability that face-to-face interactions foster. The challenge underscored is how to preserve the essence of community in an age where online presence has become overpowering and sometimes distancing.
Navigating the Truth in the Digital Age: "Don't get your news from social media. And then another way, like, I just do a gut check for myself. [...] I need to go validate." — Hillary Coover
Impact of Misinformation and Deepfakes on Political Discourse
The episode reiterates the disquieting ease with which political discourse can be manipulated through deepfakes and misinformation. Showcasing the capabilities of AI, Tracy recalls a deepfake scam involving fake professional meetings which led to financial fraud. These examples underscore the potential for significant damage when such technology is applied maliciously. Hillary emphasizes the critical need to approach online information with a keen eye, pondering the origins and credibility of what is presented. Both Tracy and Hillary stress the importance of developing a defensive posture towards unsolicited information, as the blurring lines between authentic and engineered content could have severe repercussions for individual decisions and broader societal issues.
Stress Testing and Mitigating Disinformation in Election Security Strategies
The Role of Stress Tests in Election Security
Hillary and Tracy discuss the importance of conducting stress tests to preemptively identify and mitigate vulnerabilities within election systems. These tests, which include red teaming exercises and white hat hacking, are designed to replicate real-world attacks and assess the systems' responses under duress. By simulating different attack vectors, election officials can understand how their infrastructure holds up against various cybersecurity threats. This information can be used to make necessary improvements to enhance security. The goal of these stress tests is to identify weaknesses before they can be exploited by malicious actors, thereby ensuring the integrity of the electoral process.
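As a loose illustration of how such attack-vector replay can be scripted, the sketch below runs simulated scenarios against mock defensive checks and reports which ones are mitigated. The scenarios, thresholds and rules are hypothetical stand-ins for the far richer red-team tooling an election office would actually use.

```python
# Hypothetical stress-test harness: replay simulated attack vectors against
# mock controls and report gaps. All rules and thresholds are illustrative.

def lockout_triggered(failed_attempts: int) -> bool:
    """Mock account-lockout control: trips after 5 failed logins (illustrative threshold)."""
    return failed_attempts > 5

def domain_flagged(domain: str) -> bool:
    """Mock domain filter: flags any results site that is not an official .gov domain."""
    return not domain.endswith(".gov")

SCENARIOS = [
    ("credential stuffing, 50 attempts", lambda: lockout_triggered(50)),
    ("spoofed results website", lambda: domain_flagged("county-results.example")),
]

if __name__ == "__main__":
    for name, mitigated in SCENARIOS:
        print(f"{name}: {'mitigated' if mitigated() else 'GAP - needs remediation'}")
```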
Mitigating the Impact of Disinformation
The conversation emphasizes the urgent need for preemptive measures against disinformation, which has grown more sophisticated with the advent of AI and deepfakes. As these technological advancements make discerning the truth increasingly difficult, it becomes even more crucial for election officials to prepare for the inevitable attempts at spreading falsehoods. Through stress tests that incorporate potential disinformation campaigns, officials can evaluate their preparedness and response strategies, including public communication plans to counteract misinformation. By considering the psychological and social aspects of election interference, they aim to bolster defenses and ensure voters receive accurate information.
Election Security Concerns: "Other instances are going to happen where criminals are gonna be impersonating legitimate sources to try to suppress voters in that case, or steal credentials, spread malware." — Hillary Coover
Importance of Proactive Approaches to Election Safeguarding
The exchange between Tracy and Hillary reveals a clear consensus on the necessity of proactive strategies for protecting elections. Proactively identifying potential threats and securing electoral systems against known and hypothetical cyber attacks are central to defending democratic processes. By focusing on anticipation and mitigation, rather than simply responding to incidents after the fact, authorities can improve election security and reinforce public trust. This proactive stance is also crucial in dealing with the spread of disinformation, which may be specifically tailored to exploit localized vulnerabilities in the electoral infrastructure.
Reflecting on the Challenges of Election Security in the Digital Era
This episode serves as a thorough examination of the challenges posed by digital communication in modern democracies. Tracy and Hillary delve into the dangers of misinformation and the manipulation of public opinion, highlighting how biases in AI can affect the information that individuals receive. They underscore the importance of stress-testing election systems against digital threats and recognize the complexities inherent to securing contemporary elections. The episode ultimately helps listeners better grasp the ever-evolving landscape of election security and the continued need for informed, strategic action to safeguard democratic processes.
About Our Guest
Hillary Coover is one of the hosts of It's 5:05! Podcast, covering news from Washington, D.C. Hillary is a national security technology expert and accomplished sales leader currently leading product strategy at G2 Ops, Inc.
Episode Links
Deborah Stephens, the Deputy Chief Information Officer for the United States Patent and Trademark Office (USPTO), "grew up," so to speak, in the USPTO. Deborah led the USPTO on its agile journey. As the agency took on its "New Ways of Working" by moving people and resources closer to the work, she helped empower employees to build and deploy software. Deborah shares how she guided the agency through this four-year change journey, gaining buy-in from the organization, as evidenced by an engagement rate increase from 75% to 85%. Deborah also talks about what it means to be a HISP, running USPTO as a business that is entirely self-sustaining, and, in honor of Women's History Month, the women who have inspired her along the way.
Key Topics
Deborah Stephens highlights a significant increase in the number of patent and trademark applications received by the USPTO over the years. This growth, from approximately 350,000 to 400,000 applications in 2012, with numbers continuing to rise, underscores the vibrant culture of innovation and creativity in the United States. The upward trend of applications is a positive sign of the country's ongoing commitment to innovation. However, it also presents logistical challenges for the USPTO, including the need to process a higher volume of applications efficiently while ensuring the quality of examination does not diminish.
Transition to New Ways of Working in U.S. Patent and Trademark Office: "And so in around late 2018, 19, we began our, what we referred to as our agile journey. We named it our New Ways of Working, which essentially is an entire USPTO effort. Including our business unit with 12 other business units, moving people and the resources closer to the work. Giving them that empowerment, to build, deliver, deploy software, product services for our business stakeholders, and that's both internally and externally." — Deborah Stephens
USPTO is Adapting to Increased Demand
In response to the growing demand for intellectual property protection, the USPTO has been proactive in seeking ways to maintain and improve service delivery. Deborah discusses the agency's approach to managing the influx of applications, focusing on scalability and efficiency. Despite the challenges posed by the increase in applications, the USPTO's designation as a High Impact Service Provider (HISP) has had minimal impact on its existing customer experience strategy. The agency's foundational commitment to delivering exceptional service to inventors and entrepreneurs remains steadfast, with an emphasis on continuous improvement and the adoption of new strategies to better meet the needs of the U.S. innovation community.
USPTO's Fee-Funded Model and Fiscal Strategy
USPTO's Fee-Funded Operations
Deborah highlights the United States Patent and Trademark Office's (USPTO) operational model, which is uniquely self-sufficient, relying entirely on fees collected from patent and trademark applications. This model ensures that the USPTO does not use taxpayer dollars, setting it apart from many other government agencies. By directly linking the agency's funding to the services it provides, the USPTO aligns its goals closely with the needs and successes of its primary users: inventors and businesses seeking intellectual property protection. This connection incentivizes the agency to continuously improve its processes and customer service. Additionally, Deborah mentions a tiered fee system that offers different rates for entities of various sizes, from individual inventors to large corporations. This structure is designed to lower barriers for smaller entities and encourage a wider range of innovation.
USPTO's Budgetary Discipline and Management
Facing economic pressures such as inflation, the USPTO's approach to budget management becomes even more pivotal. Deborah discusses the importance of prioritization and strategic decision-making in maintaining the agency's financial health. Despite rising costs, the USPTO strives to keep its budget stable and even reduce it when possible, demonstrating a high level of fiscal responsibility. This is achieved through careful analysis of projects and initiatives, focusing resources on areas that promise the highest impact. The USPTO's disciplined budgetary approach not only ensures its operations are sustainable but also serves as a potential model for other federal agencies. By showcasing how to effectively manage finances in a challenging economic environment, the USPTO underlines the value of strategic planning and prioritization in government fiscal strategy.
Telework Readiness and Agile Transformation at USPTO
USPTO's Transition to Telework Prior to COVID-19
Deborah highlights the USPTO's preparedness for telework well before the COVID-19 pandemic. With a significant portion of the workforce already equipped and familiar with remote working protocols, the USPTO had laid a robust foundation for telework readiness. This foresight in establishing a telework culture not only ensured the continuity of operations during unprecedented times but also underscored the agency's commitment to leveraging modern work practices. The transition to a fully remote working environment, necessitated by the pandemic, was thus more seamless for the USPTO than for many other organizations, demonstrating a proactive approach to business continuity planning.
Introducing Change in Remote Work Environments: "There were every 2 weeks of what we refer to as, lunch and learns. And in the beginning, I was the prime speaker, saying, here's our New Ways of Working. Here's the structure. Here's how we're gonna move our processes, our procedures, and people would join in. And it was all remote. I'd have a big TV like producer kind of studio, and I'd be in front of the blue screen and talking to them about this change at least every 2 weeks, if not, sometimes more." — Deborah Stephens
Agile Transformation and Cultural Shift at USPTO
The shift from traditional waterfall methods to agile methodologies marked a significant transformation within the USPTO. Deborah emphasizes that this transition was not merely about changing project management techniques; it involved a deeper cultural shift within the organization. Achieving buy-in from both individuals and teams was crucial to fostering an environment that embraced agility, empowered employees and encouraged rapid deployment of products. Key to this cultural transformation were regular remote meetings and employee engagement surveys, which played a significant role in understanding and enhancing employee satisfaction. The notable increase in engagement levels from 75% to 85% during this period of change illustrates the effectiveness of the USPTO's approach in not only implementing agile methodologies but also cultivating a culture that is receptive and adaptive to change.
Tech Landscape and Patent Filing Insights at USPTO
USPTO's "Fail Fast, Fail Forward" Approach
Deborah shares the USPTO's dynamic approach to technological innovation, encapsulated in the mantra "fail fast, fail forward." This methodology allows the USPTO to quickly test new ideas and technologies, learn from any setbacks and refine its strategies efficiently. By fostering an environment where experimentation is encouraged and failure is seen as a stepping stone to success, the agency ensures that it remains at the forefront of technological advancements. This approach is crucial in a rapidly changing tech landscape, as it enables the USPTO to adapt and innovate continuously. Deborah highlights how this philosophy has led to a more agile and responsive IT infrastructure within the agency, one capable of meeting the demands of modern patent and trademark processing.
The Value of Mentorship: "I think you need to establish your go-to network of mentors, and don't be afraid to become a mentor." — Deborah Stephens
Emphasizing Customer Feedback in Patent and Trademark Submissions
Carolyn brings attention to the importance of customer feedback in the process of patent and trademark submissions at the USPTO. Deborah explains how the agency values the insights gained from customer experiences and actively seeks out feedback to improve services. Through a variety of channels such as webinars, outreach programs and direct communication through customer service teams, the USPTO gathers valuable input from those who navigate the patent and trademark submission processes. This dedication to understanding and addressing the needs and challenges of its customers has led to significant enhancements in the USPTO's support structures. Deborah further discusses educational efforts aimed at demystifying the complexities of the patent filing process, thereby making it more accessible and navigable for inventors and businesses alike.
Digital Transformation at USPTO
USPTO's Move from Paper-Based to Digital Systems
Deborah played a significant role in transitioning the agency from a paper-based application system to a fully digitized process. This monumental task involved not just scanning existing paper documents but also integrating optical character recognition (OCR) technology to make historical patents searchable and accessible in digital form. Despite the sheer scale and logistical challenges of digitizing vast amounts of data, the initiative marked a pivotal moment in the agency's history. This transformation was not without its hurdles. Initial resistance to change was a significant barrier that needed careful navigation. However, through strategic planning and a commitment to modernization, the USPTO successfully overcame these challenges, leading to a more efficient, accessible and streamlined patent application process.
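The scan-to-searchable-text step of such a digitization effort can be pictured with a short, generic sketch; this is not USPTO's actual pipeline. It assumes the third-party Pillow and pytesseract packages (plus the Tesseract OCR engine) are installed, and the file names are hypothetical.

```python
# Generic OCR-and-index sketch for scanned documents; not USPTO's pipeline.
# Assumes Pillow, pytesseract and the Tesseract engine are installed.
from PIL import Image
import pytesseract

def ocr_page(image_path: str) -> str:
    """Extract text from one scanned page image."""
    return pytesseract.image_to_string(Image.open(image_path))

def build_index(page_paths: list[str]) -> dict[str, set[int]]:
    """Map each word to the page numbers where it appears, making scans searchable."""
    index: dict[str, set[int]] = {}
    for page_no, path in enumerate(page_paths, start=1):
        for word in ocr_page(path).lower().split():
            index.setdefault(word, set()).add(page_no)
    return index

if __name__ == "__main__":
    index = build_index(["patent_scan_p1.png", "patent_scan_p2.png"])  # placeholder file names
    print(index.get("apparatus", set()))
```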
Efficient Budget Management at the USPTO: "Being able to maintain our budget or even maybe decrease the overall budget by 1%, but yet inflation going up 8, 9%, we've been able to do that. And it's about prioritization, and that's part of our New Ways of Working." — Deborah Stephens
About Our Guest
Deborah Stephens is the Deputy Chief Information Officer (DCIO) for the United States Patent and Trademark Office (USPTO). She has served at the USPTO for more than 30 years in multiple leadership roles, during which she has worked to improve the automated tools and informational resources that facilitate electronic processing of patent applications. In her current role, Deborah is the principal advisor to the Chief Information Officer (CIO) and responsible for managing day-to-day operations of the Office of the Chief Information Officer (OCIO) with significant oversight on information technology (IT) stabilization and modernization efforts. She guides teams towards continual improvements in IT delivery for maximum value to all stakeholders.
Episode Links
As technology rapidly evolves, we as a nation need to anticipate the attacks that may come about as a result of that innovation. Travis Rosiek, the Public Sector CTO at Rubrik and former leader at the Defense Information Systems Agency (DISA), joins Tech Transforms to talk about how the government's approach to technology and relationship with industry has evolved over the last twenty years. He also discusses compliance, including FedRAMP compliance, managing the vast amount of data that is generated daily across the government and industry, and the importance of the U.S. Government building cyber resilient systems. Catch all this and more on this episode of Tech Transforms.
Key Topics
Travis discusses the early days of cloud adoption, which were often fueled by misconceptions about its benefits. The migration toward cloud computing was commonly believed to be a cost-effective solution that would reduce expenses and simultaneously enhance security. However, he points out that this was not always the case. Many organizations have since realized that the initial cost of moving to the cloud can vary greatly based on specific use cases and applications. This realization has led to a strategic shift toward what Travis refers to as a "cloud smart" approach, highlighting the need for a more discerning and tailored evaluation of how cloud resources are utilized.
The Role of Commercial Companies vs. Government in Problem-Solving: "Industry is great about solving problems. You know, driving that capitalism type of culture, building capabilities, selling solutions. And they're quicker to implement, adapt and deploy capabilities where the government is very slow in implementation of these you know, they can figure out the problem." — Travis Rosiek
The 'Cloud Smart' Strategic Approach
Taking a "cloud smart" approach indicates a maturation in the perception of cloud services by government agencies and businesses alike. Rather than a blanket cloud-first strategy, Travis indicates that there is now a more nuanced consideration of when and how to use cloud services. He underscores the importance of aligning cloud adoption with an organization's unique needs, including the potential scalability, security and cost implications. This approach suggests a collaborative and informed decision-making process, recognizing that the cloud offers a variety of solutions, each with different features, advantages and trade-offs that must be carefully weighed against organizational goals and objectives.
Navigating Cybersecurity Practices in Cloud Migration
The Balance of Technical and Non-Technical Implications in Cloud Migration
Travis discusses the intricacies involved in organizational cloud migrations, emphasizing that these undertakings are not solely about technological transitions but also encompass a variety of non-technical considerations. The shift to cloud-based services goes beyond mere data storage and infrastructure changes; it affects strategic business decisions, financial planning and operational workflows, necessitating a comprehensive evaluation of both the potential benefits and the challenges. Organizations must be acutely aware of the detailed shared responsibility models that cloud service providers outline, which delineate the security obligations of the provider versus the customer. Understanding these responsibilities helps in effectively managing the risks associated with cloud computing.
The Importance of Human Oversight in AI: "But you still can't take the human out of the loop." — Travis Rosiek
The Demand for Advanced Cybersecurity Practices in Multi-Cloud Environments
Travis highlights a significant challenge in the cybersecurity landscape, which is the scarcity of skilled professionals equipped to manage and protect complex multi-cloud and hybrid environments. As organizations increasingly adopt a mix of cloud services and on-premises solutions, the demand for cybersecurity practitioners with the necessary expertise to navigate this complexity grows. However, attracting and retaining such talent is difficult due to competitive job markets and the limitations of government pay scales. This is compounded by the extensive skill set required for modern cloud environments, including not only security but also knowledge of cloud architecture, compliance and various cloud-specific technologies. Travis underscores the need for specialized personnel capable of addressing the advanced cybersecurity concerns that arise from this intricate, dynamic infrastructure.
The Evolution of FedRAMP Compliance
FedRAMP Compliance: A Shared Burden
Travis sheds light on the evolution of the Federal Risk and Authorization Management Program (FedRAMP), a government-wide program that promotes the adoption of secure cloud services across the federal government by providing a standardized approach to security assessment, authorization and continuous monitoring. While it is often perceived as a costly and time-consuming barrier for vendors seeking to serve government clients, Travis emphasizes that the journey to FedRAMP authorization is not the sole responsibility of vendors. Government sponsors engaged in this process also bear a significant load. This dual burden requires commitment and collaboration from both parties to navigate the complexities involved in achieving FedRAMP compliance.
Strategic Cybersecurity Practices to Navigate FedRAMP Compliance Challenges
Travis goes into further detail regarding the collaborative challenges of attaining FedRAMP compliance. On the government side, a sponsor's role in shepherding vendors through the process can be incredibly taxing due to staffing and resource constraints. Furthermore, the procedural nature of the FedRAMP framework can prove to be a linear and lengthy ordeal for all involved. Travis suggests that greater investment to ease the procedural efforts for government stakeholders could potentially improve the efficiency of the overall process, helping it to mature and ultimately relieving some of the burden for both vendors and government sponsors.
Addressing Data Volume and Security Risks in Modern Cybersecurity Practices
Data Categorization and Classification
Carolyn highlights the daunting challenge of classifying the vast amounts of data that individuals and organizations are responsible for. Travis acknowledges this burden, especially given the exponential growth of data in today's digital landscape. He underscores that as data multiplies rapidly and spreads across various platforms – from cloud services to mobile devices – accurately categorizing and classifying it becomes more critical yet more difficult. Ensuring the security and proper handling of this data is paramount as mismanagement can lead to significant security breaches and compliance issues.
Cybersecurity in the Era of Cloud and Mobile Computing: "If you can't answer some of those basic questions on visibility, you're gonna struggle protecting it." — Travis Rosiek
Adapting Cybersecurity Practices to Combat Data Volume Surge
Travis points to a report produced by Rubrik Zero Labs that sheds light on the continuous surge in data volume within organizations, which often grow their data by significant percentages over short periods. This expansion amplifies the challenge of safeguarding critical information. Moreover, the need to provide accurate access control grows more complex when data resides in a hybrid environment spanning multiple clouds, on-premises servers and SaaS applications. The continuous monitoring and protection of data across these diverse and dynamic environments present an ongoing challenge for data security professionals.
Complexities in Data Access Controls
Carolyn and Travis discuss the need for visibility in distributed data environments, as knowing what data exists, where it is stored and who has access to it is fundamental to securing it. Travis advocates for NIST Special Publication 800-160 as an additional resource that can guide organizations toward building cyber resilient systems. Its principles of anticipating, withstanding, recovering and adapting offer a strategic approach not just to responding to cyber threats but also to preparing for and preventing potential data breaches in complex IT and data environments.
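The visibility questions raised here - what data exists, where it lives and who can reach it - can be pictured with a small, hypothetical inventory check. The records, sensitivity labels and group names are made-up examples; a real inventory would be pulled from cloud, on-premises and SaaS sources rather than hard-coded.

```python
# Hypothetical data-inventory check: flag sensitive assets exposed to broad groups.
# Asset records, sensitivity labels and group names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class DataAsset:
    name: str
    location: str              # e.g. "cloud", "on-prem", "saas"
    sensitivity: str           # e.g. "public", "internal", "restricted"
    readers: set[str] = field(default_factory=set)

INVENTORY = [
    DataAsset("hr_records", "saas", "restricted", {"hr-admins", "all-staff"}),
    DataAsset("press_releases", "cloud", "public", {"everyone"}),
]

BROAD_GROUPS = {"all-staff", "everyone"}

def overexposed(assets: list[DataAsset]) -> list[str]:
    """Return names of restricted assets readable by broad groups."""
    return [a.name for a in assets if a.sensitivity == "restricted" and a.readers & BROAD_GROUPS]

if __name__ == "__main__":
    print("Review access for:", overexposed(INVENTORY))  # ['hr_records']
```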
Strategic Alignment of Cybersecurity Practices with Governmental Objectives and Zero Trust Principles
Aligning Cybersecurity Practices with Governmental Objectives
When considering the acquisition of technology within government entities, Travis highlights the importance of aligning with governmental objectives. Especially when it pertains to national defense, scalability becomes a paramount factor, as the technology adopted must cater to expansive operations and adhere to rigorous standards of security and efficiency. In the military and defense sectors, technologies must not only serve unique and highly specialized purposes but also be viable on a large scale. Travis notes that achieving this balance often requires a nuanced approach that can accommodate the specific needs of government operations, while also being mindful of the rapidly evolving landscape of technology.
Cybersecurity and Organizational Resilience: "Having a false sense of security, you know, in anything we build, overly trusting things or having a false sense of security, is probably our Achilles' heel." — Travis Rosiek
Emphasizing Security Principles and Zero Trust
Travis underscores the central role of security principles in the process of technology acquisition, and he places particular emphasis on the concept of zero trust, an approach to cybersecurity that operates on the assumption that breaches are inevitable and thus requires constant verification of all users within an organization's network. Travis argues that adopting a zero trust framework is crucial for government agencies to protect against a vast array of cyber threats. By following this principle, organizations can ensure that their acquisition of technology not only meets current operational demands but is also prepared to withstand the sophisticated and ever-changing tactics of adversaries in cyberspace.
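A toy policy check can illustrate the per-request verification zero trust calls for: nothing is trusted by default, and identity, device posture and resource sensitivity are all evaluated on every request. The attributes and thresholds here are hypothetical illustrations, not any agency's actual policy engine.

```python
# Hypothetical zero trust policy check: deny by default, verify every request.
# Attributes and the re-verification threshold are illustrative only.
from dataclasses import dataclass

@dataclass
class AccessRequest:
    mfa_verified: bool
    device_compliant: bool
    minutes_since_verification: int
    resource_sensitivity: str  # "low" or "high"

def authorize(req: AccessRequest) -> bool:
    """Every attribute must pass on every request; there is no implicit trust."""
    if not (req.mfa_verified and req.device_compliant):
        return False
    if req.resource_sensitivity == "high" and req.minutes_since_verification > 15:
        return False  # force re-verification before touching sensitive resources
    return True

if __name__ == "__main__":
    print(authorize(AccessRequest(True, True, 5, "high")))   # True
    print(authorize(AccessRequest(True, False, 5, "low")))   # False: non-compliant device
```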
The ABCs of Technology Implementation
The Adoption, Buying and Creating Strategy
Travis reflects on a strategic approach he learned during his tenure at DISA, known as the ABCs, a methodology imparted by then-DISA director General Charlie Croom. This strategy prioritizes the use of existing commercial technologies, emphasizing 'adoption' as the primary step. By leveraging commercially available tech, organizations can tap into advanced capabilities and integrate them into their operations swiftly. The 'buy' component encourages the procurement of already fielded technologies or platforms that may not be commercially created but have been proven in practical governmental applications. Lastly, 'create' is seen as a last resort, reserved for instances where the needs are so specialized or critical that a bespoke solution is warranted, often due to unique use cases or strict national security concerns.
Strategic Balancing of Commercial Speed and Government Foresight in Cybersecurity Practices
In discussing the rationale behind the ABCs framework, Travis reveals the nuanced balance required in government tech implementations. While commercial entities' speed to deploy novel solutions can address particular gaps, government institutions often play a crucial role in identifying and tackling long-term, complex challenges. Especially in defense, the need to build solutions from the ground up may arise when existing products fail to meet the stringent requirements of security-sensitive operations. Conversely, commercial technology's versatility is a critical asset, and leaning on it marks a shift from the government's historical tendency to primarily develop its own technology solutions. Travis urges organizations to use this strategic framework to make informed, prudent decisions that consider both immediate needs and long-term strategic objectives.
About Our Guest
Travis Rosiek is a highly accomplished cybersecurity executive with more than 20 years in the industry. He has built and grown cybersecurity companies and led large cybersecurity programs within the U.S. Department of Defense (DoD). His experience ranges from driving innovation as a cybersecurity leader for global organizations and CISOs to building products and services as a corporate executive. His impact has helped lead to successful IPOs (FireEye) and acquisitions (BluVector by Comcast).
As a Cyber Leader in the U.S. DoD, he has been awarded the Annual Individual Award for Defending the DoD's Networks. Travis currently serves as the Public Sector CTO at Rubrik, helping organizations become more cyber and data resilient. Prior to Rubrik, Travis held several leadership roles including Chief Technology and Strategy Officer at BluVector, CTO at Tychon, Federal CTO at FireEye, a Principal at Intel Security/McAfee and a Leader at the Defense Information Systems Agency (DISA).
He earned a Certificate from GWU in Executive Leadership and graduated from West Virginia University with honors while earning multiple engineering degrees. He also was one of the first ten students from across the nation to be awarded a DoD/NSA scholarship in cybersecurity. His pioneering mindset has helped him better secure our nation and commercial critical infrastructure. Additionally, Travis is an invited speaker and author (blogs, journals, books) and has served on the NSTAC, as an ICIT Fellow and on multiple advisory boards.
Episode Links
Sebastian Taphanel has spent his life on the cutting edge of technology and innovation. This week on Tech Transforms, Sebastian is sharing tales and lessons learned from his 20 years in DoD Special Ops and intelligence and 20 years implementing sound security engineering practices focused on implementing zero trust and highly resilient environments. Join Sebastian as he recounts his time in Special Forces taking his units out of the dark ages from secure fax communications to setting up an intranet, and how he continued with that innovative spirit through his 40-year career. He also shares his new passion: encouraging the industry to utilize disabled veterans to help fill both the cybersecurity and AI workforce gaps. They, after all, already have a call for the mission.
Key Topics
Sebastian Taphanel's experience spans twenty years in DoD Special Ops and intelligence, followed by consulting in security engineering. The focal point of this episode is his role in advancing cybersecurity practices at the Office of the Director of National Intelligence (ODNI), particularly his emphasis on resilient cloud-based environments.
Sebastian describes the quick adaptation during the pandemic, which led to the rollout of an ad hoc cloud-based workspace to ensure the ODNI's mission could endure despite the workforce being remote. GCC High, Microsoft's Government Community Cloud High offering, is revealed as the successor to the initial setup, providing a more secure platform managed strictly by U.S. persons. The approach highlighted the agility of cloud technology for remote collaboration within federal agencies.
Cybersecurity in Intelligence Sharing: "Essentially, reciprocity is a process and also a culture of accepting each other's risks. And that's really the bottom line on all that." — Sebastian Taphanel
Unfolding the GCC High Environment
The intricacies of implementing Microsoft Azure and M365 (Office 365) are detailed as Sebastian underlines their pivotal use in creating an intranet with controlled document sharing and editing. These implementations include robust mobile device management (MDM) and a bring-your-own-device (BYOD) mobile application management (MAM) system that protects sensitive data on both government and personal devices, thereby ensuring operational security and flexibility.
Special Ops Communication Evolution
Sebastian advanced from using secure faxes for interstate communication within military units to establishing a multi-state secure WAN. This resulted in a significant leap in communication efficacy for special operations. Sebastian shared the potency of secure, cloud-based tools in streamlining and securing government communications, as well as their inherent adaptability to contemporary operational needs.
Zero Trust Implementation and Reciprocity in Security Controls: "Reciprocity, in some circles, it's a dirty word. Because everybody wants to do it, but nobody really wants to be first." — Sebastian Taphanel
The Shift to Cybersecurity Training and AI
Special Ops to Cyber Ops: Training Disabled Veterans to Bridge the Cybersecurity Workforce Gap
Sebastian recognizes the increasing importance of cybersecurity expertise in today's digital landscape. He points out the significant gap in the cybersecurity workforce and the untapped potential of disabled veterans who can be trained to meet this demand. This shift towards prioritizing cybersecurity skills reflects the industry's evolution as organizations increasingly rely on digital infrastructure, creating fertile ground for cyber threats. By focusing on equipping disabled veterans, who already possess a strong sense of duty and protection, with the necessary technical skills to combat these threats, Sebastian believes that we can build a robust cybersecurity force that benefits not just the veterans but the nation's overall security posture as well.
Training Disabled Veterans for Cybersecurity and AI
Building upon his own transition from a military career to cybersecurity, Sebastian is passionate about creating opportunities for disabled veterans in the field. His experience has shown him that these individuals, with their ingrained ethos of national service, can continue their mission through careers in cybersecurity and artificial intelligence. Sebastian advocates for collaborations with major tech companies and training providers to establish programs specifically tailored for veterans. These developmental opportunities can help translate military competencies into civilian technology roles. As AI continues to influence various industry sectors, including cybersecurity, the need for skilled professionals who can leverage AI effectively is critical. By providing appropriate training and mentorship, Sebastian sees disabled veterans playing an integral role in shaping the future of cybersecurity and AI.
Special Ops Veteran Illuminates Zero Trust as a Data-Centric Security Model and the Strategic Role of AI in Cybersecurity
Zero Trust as a Data-Centric Security Model
In the evolving landscape of cybersecurity, Sebastian brings to light the concept of zero trust, a framework pivoting away from traditional perimeter security to a data-centric model. He highlights zero trust as a foundational approach, which is shaping the way organizations safeguard their data by assuming no implicit trust and by verifying every access request as if it originates from an untrusted network. Unlike the historical castle-and-moat defense strategy, which relied heavily on securing the perimeter of a network, this paradigm shift focuses on securing the data itself, regardless of its location. Zero trust operates on the fundamental belief that trust is a vulnerability, anchoring on the principle that both internal and external threats exist on the network at all times. It necessitates continuous validation of the security posture and privileges of each user and device attempting to access resources on a network.
Zero Trust as a Data-Centric Security Model: "Zero trust now has essentially redrawn the lines for cybersecurity professionals and IT professionals. And I will say it's an absolutely data-centric model. Whereas in previous decades, we looked at network centric security models." — Sebastian Taphanel
Implementing Zero Trust in Special Ops
Zero trust extends beyond theoretical formulations, requiring hands-on execution and strategic coherence. As Sebastian explains, the principle of reciprocity plays a vital role in the context of security authorizations among different agencies. It suggests that the security controls and standards established by one agency should be acknowledged and accepted by another, avoiding redundant security assessments and facilitating smoother inter-agency cooperation. However, applying such principles in practice has been sporadic across organizations, often hindered by a reluctance to accept shared risks. Driving home the notion that strategic plans must be actionable, Sebastian underscores the critical need to dovetail high-level strategies with ground-level tactical measures, ensuring these security frameworks are not merely aspirational documents but translate into concrete protective actions.
Special Ops in Cybersecurity: Harnessing AI and ML for Enhanced Defense Capabilities
Amidst rapid technological advances, artificial intelligence (AI) and machine learning (ML) are being called upon to bolster cybersecurity operations. Sebastian champions the idea that AI and ML technologies are indispensable tools for cyber professionals who are inundated with massive volumes of data. By synthesizing information and automating responses to security incidents, these technologies augment the human workforce and fill critical gaps in capabilities. The agility of these tools enables a swift and accurate response to emerging threats and anomalies, allowing organizations to pivot and adapt to the dynamic cyber landscape. For cybersecurity operators, the incorporation of AI and ML translates to strengthened defenses, enriched sense-making capabilities and enhanced decision-making processes. In a field marked by a scarcity of skilled professionals and a deluge of sophisticated cyber threats, the deployment of intelligent systems is no longer a luxury; it is imperative for the preservation of cybersecurity infrastructures.
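One common way ML augments this kind of triage is anomaly detection over activity baselines. The sketch below is a minimal, hypothetical example using scikit-learn's IsolationForest on synthetic features; a real pipeline would draw engineered features from logs and telemetry rather than random placeholders.

```python
# Hypothetical anomaly-detection sketch for security telemetry.
# Assumes numpy and scikit-learn are installed; features are synthetic placeholders.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Baseline of normal hours: [logins per hour, megabytes transferred per hour].
baseline = rng.normal(loc=[5.0, 50.0], scale=[1.0, 10.0], size=(500, 2))

model = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

# Two new observations: a typical hour and a burst of logins with a large transfer.
new_events = np.array([[5.0, 52.0], [40.0, 900.0]])
labels = model.predict(new_events)  # 1 = looks normal, -1 = anomaly

for event, label in zip(new_events, labels):
    verdict = "escalate to analyst" if label == -1 else "normal"
    print(event, verdict)
```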
Looking Ahead: Collaboration, Reciprocity and AI/ML Workforce
AI/ML as a Cybersecurity Force Multiplier
Sebastian highlights the untapped potential of artificial intelligence and machine learning (AI/ML) as critical tools that can amplify capabilities within the cybersecurity realm. As Sebastian provides his insights on the importance of AI/ML, it becomes clear that these technologies will serve as force multipliers, aiding overwhelmed cybersecurity professionals dealing with vast arrays of data. The envisaged role of AI/ML is to streamline sense-making processes and facilitate prompt, accurate cyber response actions to threats and vulnerabilities. Sebastian portrays a future where strategic use of AI/ML enables swift and informed decision-making, freeing cybersecurity operatives to focus on critical tasks that require their expertise.
AI/ML as a Cybersecurity Force Multiplier: "I believe what's going to be needed is the understanding, a training and culture that accepts AI/ML as an enabler." — Sebastian Taphanel
Empowering Special Ops Veterans for the Future Cybersecurity and AI/ML Workforce
Sebastian asserts the urgency of preparing and equipping individuals for the cybersecurity and AI/ML workforce. He envisions an actionable plan to invigorate the employment landscape, creating a resilient front in the fight against cyber threats. Sebastian calls for a strategic focus on training and knowledge dissemination, particularly for disabled veterans, to incorporate them into positions where they can continue serving the nation's interests in the digital domain. Recognizing the fast-evolving nature of these fields, he stresses the need for a workforce that not only understands current technologies but can also adapt to emerging trends, ensuring that collective efforts in data protection and cybersecurity remain robust and responsive to an ever-changing threat landscape.
About Our Guest
Sebastian Taphanel blends a more than 20-year DoD Special Ops and intelligence career with more than 20 years of sound security engineering practices focused on implementing Zero Trust and highly resilient environments through the use of innovative technologies and common-sense business practices.
The real question is, what doesn’t Dr. Amy Hamilton do? She’s currently the visiting Faculty Chair for the Department of Energy (DOE) at National Defense University and the DOE Senior Advisor for National Cybersecurity Policy and Programs, and has had previous stops in the U.S. Army Reserves, NORAD and U.S. European Command, just to name a few.
At National Defense University, Amy draws on all of this expertise to educate the workforce on AI and finding the right balance between automation and workforce training. Amy also explores how she teaches her students that cybersecurity has to be more than a 9-5 job, the balance of security vs. convenience, and how it will take the entire country getting on board to make the implementation of cybersecurity best practices truly possible. In this episode, we also dive into the realm of operational technology and the need to look to zero trust as we allow more smart devices into our lives and government ecosystems.
Key Topics
Dr. Amy Hamilton underlines the capabilities of artificial intelligence to streamline time-consuming processes, specifically the creation of abstracts. This innovation allows for a transition from mundane, repetitive tasks to pursuits that require a deeper cognitive investment, thereby elevating the nature of the workforce's endeavors. Dr. Hamilton focuses on the practical applications of this technology, citing an instance from the National Defense University's annual Cyber Beacon Conference where participants were challenged to distinguish between AI-generated and human-generated abstracts and often found it difficult to tell them apart. This exercise not only highlighted AI's proficiency but also introduced the workforce to the safe and practical application of this emergent technology.
How do we use AI in a way that goes from low-value to high-value work? If I'm not doing abstract, what other things could I be doing and spending my brain calories towards? - Dr. Amy Hamilton
Preparing the Workforce for Cyber Innovation
Dr. Hamilton stresses the necessity of workforce education in the context of AI and automation, aiming for a future where employees are neither intimidated by nor unfamiliar with the advancing technological landscape. She illustrates the Department of Energy's proactive role in integrating AI into its training programs, ensuring that employees are well-acquainted with both the operational and potential ethical dimensions of AI deployment. Acknowledging the diverse range of operations within the DOE, including nuclear and environmental management, Dr. Hamilton notes that the appropriateness of AI application varies by context, signifying the department's nuanced approach to introducing these technologies. Through education and exposure to use cases within a controlled environment, Dr. Hamilton envisions a workforce that is not only comfortable with AI but can also leverage it to enhance productivity and safety in their respective fields.
Cyber Innovation and Collaboration in Government Environments
Dr. Hamilton's Role at National Defense University
Amy serves as a crucial beacon for educating Department of Defense personnel on comprehensive government functions. With a focus on the distinct agencies and their interaction within the broader governmental ecosystem, she acts as a conduit, clarifying for her students the intricate dance of interagency collaboration. Explaining how certain branches, like the Treasury, interact during cyber events, or how the varied components within an agency function, helps demystify the convoluted nature of interdepartmental cooperation. Her teaching elevates students' comprehension of the interconnected roles and responsibilities that propel our government forward.
Environment for Cyber Innovation
At National Defense University, a clear distinction is made between no-tolerance environments, such as nuclear facilities, where repetition and extreme scrutiny are valued over experimentation, and open science labs that thrive on creativity and constant innovation. Dr. Amy Hamilton underlines this dichotomy, establishing the need for both the rigid reliability of technology in some contexts and unabated exploration of new horizons in others. These contrasting settings ensure the Department of Energy's multifaceted missions are pursued through a lens of both caution and curiosity, across a breadth of projects from the highly sensitive to the openly experimental.
Attracting Talent to Federal Government
The College of Information and Cyberspace, where Amy engages with the bright minds of the defense community, presents an academic path tailored for mid- to senior-career professionals. With a suite of master's degrees and certificate programs, the college not only imparts education but also fosters an ecosystem ripe for nurturing the government leaders of the future. Despite the widespread perception of financial hurdles in government roles compared to the private sector, Dr. Hamilton articulates a potent alternative allure: the mission-driven nature of public service. This inherent value proposition attracts those who yearn to contribute to a greater cause beyond monetary gain, ensuring a continual influx of devotion and expertise within federal ranks.
So I think there's a huge amount of value of what flexibility of recognizing industry experience in cybersecurity can be very, very useful. But I also think, like, how do we attract people in the federal government when we don't have that kind of financial ability to reward? And I think it's reward by mission. - Dr. Amy Hamilton
Fostering Diversity and Cyber Innovation
Cyber Outreach and Advocating Diversity
Dr. Hamilton touches on the vital role of cyber outreach and advocating for diversity in the field of cybersecurity. She brings up Kennedy Taylor, who is making strides as Miss Maryland by combining her cyber expertise with her platform in beauty pageantry to engage and educate young people, especially girls, about the significance of cybersecurity. Amy highlights the potential of such outreach efforts to challenge and change the stereotypes associated with cybersecurity professionals. By leveraging the influence of figures like Miss Maryland, there is an opportunity to inspire a diverse new generation of cybersecurity experts who can bring fresh perspectives to the industry's challenges.
The Need for Cyber Innovation
Throughout the discussion, Dr. Amy Hamilton stresses the increased frequency and severity of cybersecurity threats that have surfaced recently. Acknowledging that traditional cybersecurity models are faltering under these new strains, she calls for innovative thinking and proactive measures. Amy notes that measures used in the past, such as security through obscurity, no longer suffice given the complex and interconnected nature of modern technology. This new reality requires the cybersecurity sector to evolve and embrace zero trust principles, among other modern strategies, to safeguard against a continually evolving threat landscape.
How do we correct, just swiftly get around to being able to apply those patches and things that we need to do? And we have to get better out of it because our adversaries are. Our adversaries were taking advantage of this every single day. - Dr. Amy Hamilton
Addressing Risk Aversion in Cybersecurity
In discussing the inherent risk aversion in human nature, Dr. Hamilton points out that despite this tendency, convenience often trumps caution, leading to increased vulnerabilities. She suggests that the answer is not to shy away from innovation for fear of risk, but rather to use it to enhance the safety and functionality of technological systems. Dr. Hamilton also highlights the crucial role that industry partnerships play in this context, suggesting that collaboration between government and the private sector is essential in developing effective and robust cybersecurity defenses. By working together, these entities can find the balance between convenience and security, ensuring a safer digital environment for all users.
Challenges in Implementing Cyber Innovation
Importance of User Experience in Cyber Innovation
Dr. Amy Hamilton brings attention to the crucial role user experience plays when incorporating automation into the workforce. She contrasts the tedious and often frustrating nature of conventional cybersecurity practices, such as manually sifting through logs, with the ease automation can provide. Amy uses the example of e-commerce, where users intuitively navigate online shopping without any training, to illustrate her point that intuitive design is key to user acceptance of automated systems. By adopting user-friendly automation, employees' tasks can be streamlined, allowing them to focus on more complex and engaging aspects of their work.
And so I think that we need to really realize that user experience is important. - Dr. Amy Hamilton
AI and Automation in Everyday Life
Reflecting on her experience with AI in website design, Amy describes the simplicity and efficiency of AI-assisted tools that automatically generate content based on keywords, eliminating the need for extensive technical knowledge in web development. This underscores the tangible benefits of automation for individuals without a background in coding. Moreover, Amy emphasizes the societal shift toward greater reliance on automated systems by referencing Disney World as a model of successful automation integration. The theme park's seamless integration of automated booking systems, fast passes and reservations highlights how well-designed automation can improve the customer experience and efficiency of large-scale operations.
Partnerships in Cyber Innovation
The dialogue shifts toward the collaborative effort required to tackle cybersecurity breaches. Dr. Hamilton mentions the expansive SolarWinds incident as a key example of where AI and automation have a role to play. Amy underscores the significance of industry partnerships and a unified national approach to enhancing cybersecurity. The incident illustrates that automated tools and AI are not only about convenience; they are instrumental in swiftly identifying and rectifying vulnerabilities in complex digital systems. By automating these processes, agencies can respond more effectively to cybersecurity threats, underscoring the need for automation that complements and enhances human efforts in maintaining security.
Educational Technologies
Amy advocates for the use of educational tools like Khan Academy, which can benefit children by offering a controlled environment for learning. She stresses the importance of early cybersecurity awareness, suggesting that exposure to best practices should align with the first use of digital devices. This early introduction to cybersecurity principles, aided by educational technologies, is vital in preparing the next generation to navigate the expanding digital frontier securely. Automation in education, therefore, serves a dual purpose: streamlining the learning process while simultaneously fostering a culture of digital safety awareness from a young age.
Executive Orders and Collaboration for Cyber Innovation
The Administration's Challenges in Artificial Intelligence Regulation
Dr. Amy Hamilton discusses the executive order on artificial intelligence, acknowledging the inherent challenges of being a government pioneer in regulating groundbreaking technology. She compares the order to earlier attempts at cybersecurity regulation and the long-standing effects those have on policy today. Dr. Hamilton predicts that, in hindsight, we may come to see today's orders as early steps in an evolving landscape. Given her past experience at OMB in the Executive Office of the President, she understands the complexity of crafting policy that will need to adapt as technology progresses.
Collaborative Efforts for Cybersecurity Workforce Development
Dr. Amy Hamilton underlines the need for collaborative synergy between government and industry to foster a robust cybersecurity workforce. With growing intellectual property theft, especially from China, she stresses that safeguarding proprietary information is not just an industry burden but a national and allied concern. Dr. Hamilton points out that partnerships with non-profit organizations play a vital role in shaping a national response to cybersecurity challenges. Such alliances are essential for maintaining cybersecurity and counteracting espionage activities that affect not only the U.S. but also its international partners.
Public Awareness and Cybersecurity Breaches
Carolyn and Dr. Amy Hamilton echo a mutual frustration over the general public's lack of awareness regarding cybersecurity threats. They underscore the gravity of cybersecurity breaches and the espionage activities that target nations' security and economic well-being. Dr. Hamilton uses historical incidents to illustrate the ongoing battle against cyber threats and the need for heightened public consciousness. The discussion implies that bolstering public awareness and concern is pivotal in the collective effort to enhance national cybersecurity.
About Our Guest
Amy S. Hamilton, Ph.D. is the Department of Energy Senior Advisor for National Cybersecurity Policy and Programs. Additionally, she is the Visiting Faculty Chair for the Department of Energy at National Defense University. She served two years as a senior cybersecurity policy analyst at the Office of Management and Budget, Executive Office of the President. She served in the Michigan Army National Guard as a communications specialist and was commissioned as an officer into the U.S. Army Signal Corps, serving on Active Duty and later in the U.S. Army Reserves. She has worked at both the U.S. European Command and the U.S. Northern Command & North American Aerospace Defense Command (NORAD) on multiple communications and IT projects.
She became a certified Project Management Professional through the Project Management Institute in 2007 and earned her Certified Information Security Manager certification in 2011. And she presented “The Secret to Life from a PMP” at TEDxStuttgart in September 2016. She taught Project Management Tools at Colorado Technical University and was a facilitator for the Master’s Degree Program in Project Management for Boston University. She is an award-winning public speaker and has presented in over twenty countries on overcoming adversity, reaching your dreams, cybersecurity, and project management.
Dr. Hamilton holds a Bachelor of Science (BS) in Geography from Eastern Michigan University, a Master of Science (MS) in Urban Studies from Georgia State University, a Master of Science in Computer Science (MSc) from the University of Liverpool, and a Master Certificate in Project Management (PM) and Chief Information Officer (CIO) from the National Defense University, and she completed the Air War College at the U.S. Air University. She completed her Doctor of Philosophy (PhD) at Regent University in its Organizational Leadership Program with a dissertation on "Unexpected Virtual Leadership: The Lived Experience of U.S. Government IT and Cybersecurity Leaders transitioning from physical to virtual space for COVID-19." Amy's motto is: "A woman who is passionate about project management, public speaking, and shoes."
Episode Links
Have you heard? Data is the new oil. JR Williamson, Senior Vice President and Chief Information Security Officer at Leidos, is here to explain where data's value comes from, the data lifecycle and why it is essential for organizations to understand both of those things in order to protect this valuable resource. Join us as JR breaks it all down and also explores the concept he dubbed "risktasity," which he uses to describe the elasticity of rigor based on risk. As he says, "when risk is high, rigor should be high, but when risk is low, rigor should be low."
Key Topics
JR provided a snapshot of the past, comparing cybersecurity practices from the 1990s to what we see today. With 37 years of experience, he recalled a time when IT systems were centralized and attack surfaces were significantly smaller. Contrasting this with the present, he spoke about how the migration to cloud services has expanded the attack surface. JR noted an increase in the complexity of cyber threats due to the widespread distribution of networks and the need for anytime-anywhere access to data. He stressed the transition from a focus on network security to a data-centric approach, where protecting data wherever it resides has become a paramount concern.
Data Life Cycle: "So part of understanding, the data itself is the data's life cycle. How does it get created? And how does it get managed? How does it evolve? What is its life cycle cradle to grave? Who needs access to it? And when they need access to it, where do they need access to it? It's part of its evolution. Does it get transformed? And sometimes back to the risktasity model, the data may enter the content life cycle here at some level. But then over its evolution may raise, up higher." — JR Williamson
The New Oil: Data
In the world JR navigates, data is akin to oil: a resource that, when refined, can power decisions and create strategic advantages. He passionately elucidated the essence of data, not just as standalone bits and bytes, but as a precursor to the insights that drive informed decisions. Addressing the comparison between data and oil, JR stressed that the real value emerges from what the data is transformed into: actionable insights for decision-making. Whether it's about responding with agility in competitive marketplaces or in the context of national defense, delivering insights at unmatched speed is where significant triumphs are secured.
Importance of Data Security
JR Williamson on Data and "Risktasity"
JR Williamson stresses the heightened necessity of enforcing security measures that accompany data wherever it resides. As the IT landscape has evolved, the focus has broadened from a traditional, perimeter-based security approach toward more data-centric strategies. He articulates the complexity that comes with managing and safeguarding data in a dispersed environment, where data no longer resides within the confines of a controlled network but spans a myriad of locations, endpoints and devices. This shift has rendered traditional security models somewhat obsolete, necessitating a more nuanced approach that can adapt to the dynamic nature of data.
The Value of Data in Decision-Making: "The data in and of itself is really not that valuable. Just like oil in and of itself is not that valuable. But what that oil can be transformed into is what's really important, and that's really the concept." — JR Williamson
Data Security Experiences
Both Mark and Carolyn resonate with JR's insights, drawing parallels to their own experiences in cybersecurity. Mark appreciates the straightforwardness of JR's "risktasity" model, which advocates proportional security measures based on evaluated risk. This principle challenges the one-size-fits-all approach to cybersecurity, fostering a more tailored and efficient allocation of resources. Carolyn, in turn, connects the conversation to her history of grappling with the intricacies of data classification and control. She acknowledges the tactical significance of understanding which data warrants more stringent protection, as well as the operational adjustments required to uphold security while enabling access and utility.
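As a minimal sketch of the "risktasity" idea in code (an illustration of the proportionality principle only, not anything Leidos uses), rigor can be modeled as the set of controls applied, chosen by assessed risk; the tiers and control names are assumptions made up for the example.

```python
from enum import IntEnum

class Risk(IntEnum):
    LOW = 1
    MODERATE = 2
    HIGH = 3

# Illustrative mapping only: the applied rigor scales with the assessed risk.
CONTROLS_BY_RISK = {
    Risk.LOW:      ["mfa"],
    Risk.MODERATE: ["mfa", "encryption_at_rest", "access_logging"],
    Risk.HIGH:     ["mfa", "encryption_at_rest", "access_logging",
                    "continuous_monitoring", "data_loss_prevention"],
}

def required_controls(data_risk: Risk) -> list[str]:
    """When risk is high, rigor is high; when risk is low, rigor is low."""
    return CONTROLS_BY_RISK[data_risk]

print(required_controls(Risk.HIGH))
```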
Data Governance and Security Strategies
Understanding Data Security and Lifecycle
JR emphasizes the importance of understanding the data's lifecycle, acknowledging that comprehensive knowledge of how data is created, managed and ultimately disposed of is a cornerstone of effective cybersecurity. This involves not only recognizing the data's trajectory but also identifying who needs access to it, under what conditions, and how it may evolve or be transformed throughout its lifecycle. By establishing such a deep understanding, JR suggests, it becomes possible to design governance systems that are not only effective in theory but also practical and integrated into the daily operations of an organization.
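The lifecycle framing lends itself to a simple record that follows the data from creation to disposal (a hedged sketch under assumed stage names and roles, not a description of any particular governance product), so that access decisions can consult both who is asking and where the data currently sits in its life.

```python
from dataclasses import dataclass, field
from enum import Enum

class Stage(Enum):
    CREATED = "created"
    IN_USE = "in_use"
    ARCHIVED = "archived"
    DISPOSED = "disposed"

@dataclass
class DataAsset:
    name: str
    sensitivity: str                           # e.g. "internal", "restricted"
    stage: Stage = Stage.CREATED
    allowed_roles: set[str] = field(default_factory=set)
    history: list[Stage] = field(default_factory=list)

    def transition(self, new_stage: Stage) -> None:
        """Record each lifecycle change so governance can follow the data cradle to grave."""
        self.history.append(self.stage)
        self.stage = new_stage

    def can_access(self, role: str) -> bool:
        """Access depends on who is asking and where the data is in its lifecycle."""
        return role in self.allowed_roles and self.stage is not Stage.DISPOSED

payroll = DataAsset("payroll_records", sensitivity="restricted", allowed_roles={"hr_analyst"})
payroll.transition(Stage.IN_USE)
print(payroll.can_access("hr_analyst"))  # True
print(payroll.can_access("contractor"))  # False
```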
Strategy and Organizational Support
Transitioning from a theoretical framework to practical execution, JR discusses the necessity of an effective data protection model that can operationalize the overarching strategy. To accomplish this, an organization must develop a structure that aligns with and supports the strategic objectives. JR identifies that existing structures often serve as the most significant barriers when agencies work on implementing new cybersecurity strategies. Organizations must be prepared to confront and renovate legacy systems and management frameworks. This is a challenge that became increasingly evident as organizations rapidly shifted to cloud services to accommodate remote work during the pandemic.
Insights from Data Security and AI Impact
Transformation of Data into Actionable Insights
Like oil, data's true value isn't in its raw form but in the conversion process, which transforms it into insights for decision-making. JR reflects on the progression of data into information, which evolves into knowledge and culminates in actionable insight. Just as the versatility of oil lies in its ability to be refined into various fuels and materials, the potential of data is unlocked when it is analyzed and distilled into insights that inform crucial decisions. JR emphasizes that the effectiveness of insights hinges not just on accuracy but also on understanding the context in which those insights are applied. He suggests that these refined insights amount to competitive advantages, enabling quicker and more informed decision-making in mission-critical environments.
The Importance of Data Insight in Business: "Getting the insight in and of itself is important. But combining that insight with understanding of the problem we're trying to solve is really where the competitive advantage comes into play." — JR Williamson
AI's Speed Impact on Cybersecurity and Defense
JR expresses apprehension regarding artificial intelligence's acceleration and its implications for cybersecurity and defense. This unease stems from AI's capability to operate at a pace vastly superior to human capacity. Such rapid capabilities could lead to a perpetual struggle for cybersecurity professionals, who are tasked with defending against AI-driven attacks that continually outpace their responses. For organizations to not only protect themselves but also remain competitive, JR advocates for the adoption of similar AI technologies. By leveraging advanced tools, organizations can preemptively identify vulnerabilities and secure them before they are exploited by adversaries. He alludes to an emerging arms race in cybersecurity, driven by AI advancements that necessitate a proactive rather than reactive approach to digital threats.
Shifting Mindset in Data Security and Zero Trust Architecture
Broader Perspective on Defensive Data Security
Carolyn and Mark, touching on the complexities of cybersecurity, speculate about a potential paradigm shift. Rather than focusing solely on prevention, they wonder if the strategy might pivot towards containment and control once threats are within the system. JR agrees that in today's vast and interconnected digital environment, absolute prevention is increasingly challenging. Though cybersecurity has traditionally been likened to reinforcing a castle's walls, JR argues that due to the dispersed nature of modern networks and cloud computing, this approach is becoming outdated. Instead, organizations need to be agile and resilient, with security measures embedded within the data and applications themselves, ensuring they can quickly detect, mitigate and recover from breaches.
Dissecting the Concept of Zero Trust Architecture
JR expresses discontent with the term "zero trust" because it implies offering no trust whatsoever, which would stifle any exchange of information. He advocates for the terms "earned trust" or "managed trust" to more aptly describe the nuanced relationship between users and the systems they interact with. Security architecture, JR illustrates, should not rely solely on verifying users' identities; it has to account for the integrity and security posture of the devices and locations being used to access the data. By meticulously understanding which data are most sensitive and their lifecycles, organizations can ensure that access controls are rigorously applied where necessary, based on the type of data, the user's context and the access environment. This nuanced approach is fundamental to constructing a robust and adaptive zero trust architecture that evolves along with the organizational ecosystem.
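One way to picture "earned trust" (a purely illustrative sketch; the weights and thresholds are assumptions, not JR's model) is as a score accumulated from several signals, with more sensitive data demanding a higher score before access is granted.

```python
def trust_score(identity_verified: bool, device_compliant: bool, location_risk: float) -> float:
    """Combine signals into an 'earned trust' score between 0 and 1 (weights are illustrative)."""
    score = 0.0
    score += 0.4 if identity_verified else 0.0
    score += 0.3 if device_compliant else 0.0
    score += 0.3 * (1.0 - location_risk)  # a riskier access environment earns less trust
    return score

def access_decision(score: float, data_sensitivity: float) -> str:
    """More sensitive data demands a higher earned-trust threshold before access is allowed."""
    threshold = 0.5 + 0.4 * data_sensitivity  # from 0.5 for low-sensitivity data up to 0.9
    return "allow" if score >= threshold else "step-up authentication"

s = trust_score(identity_verified=True, device_compliant=True, location_risk=0.2)
print(round(s, 2), access_decision(s, data_sensitivity=0.9))  # 0.94 allow
```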
About Our Guests
JR Williamson is accountable for information security strategy, business enablement, governance, risk, cybersecurity operations and classified IT at Leidos. JR is a CISSP and Six Sigma Black Belt. He serves on the Microsoft CSO Council, the Security 50, the Gartner Advisory Board, the Executive Security Action Forum Program Committee, and the DIB Sector Coordinating Council. He is also part of the WashingtonExec CISOs, the Evanta CISO Council, the National Security Agency Enduring Security Framework team, and is the Chairman of the Board of the Internet Security Alliance.
Episode Links
What will 2024 have in store for technology development and regulation? Our hosts, Carolyn Ford and Mark Senell, sat down with Roger Cressey, Partner at Mountain Wave Ventures, Ross Nodurft, Executive Director of the Alliance for Digital Innovation, and Willie Hicks, Public Sector Chief Technologist for Dynatrace, to discuss their 2024 predictions. Discover what the experts think will occur next year in terms of FedRAMP, AI regulation, Zero Trust and user experience.
Key Topics
Ross predicts that in 2024, FedRAMP will be completely reauthorized based on a pending OMB memo that is expected to be finalized in late 2023. This revamp is intended to streamline and improve the FedRAMP authorization process to facilitate faster adoption of cloud-based solutions in government.
However, Roger believes the changes could temporarily slow things down as agencies take time to understand the implications of the new FedRAMP structure on their systems and assess risks. This could require investments from industry as well to meet new requirements that emerge.
FedRAMP 2024: "I think it's going to have a lot of agencies take a hard look at their risk and decide where they want to elevate certain high-valued assets, high-valued systems, high-valued programs, and the authorizations themselves are gonna raise in their level." — Ross Nodurft
Shift From Moderate Baseline to Higher Baseline of Controls
As part of the FedRAMP reauthorization, Ross expects many agencies will shift their systems from a moderate baseline to a higher baseline of security controls. With more interconnected systems and datasets, agencies will want heightened protections in place.
Roger concurs that the increased scrutiny on risks coming out of the FedRAMP changes will lead organizations, especially those managing high-value assets, to pursue FedRAMP High authorizations more frequently.
Increased Demand for a FedRAMP High Environment
Given the predictions around agencies elevating their security thresholds, Willie asks Ross whether the pipeline of solutions currently pursuing FedRAMP High authorizations could face disruptions from new program requirements.
Ross believes there will be some temporary slowdowns as changes are absorbed. However, he notes that the goals of the reauthorization are to increase flexibility and accessibility of authorizations. So over time, the new structure aims to accelerate FedRAMP High adoption.
2024 Predictions: Navigating FedRAMP Changes While Maintaining Industry Momentum
As Ross highlighted, the intent of the FedRAMP reauthorization is to help industry get solutions to market faster. But in the short term, there could be some complications as vendors have to realign to new standards and processes.
Willie notes that companies like Dynatrace have already begun working towards FedRAMP High in anticipation of rising customer demand. But sudden shifts in requirements could impact those efforts, so he hopes there will be considerations for solutions currently undergoing authorizations.
2024 Predictions on Cybersecurity Trends
Zero Trust Framework
Roger discusses how zero trust architectures are progressing in adoption, even though the concept has lost some of its previous buzz. The zero trust memo is still in place, people are budgeting for zero trust and funding is starting to be allocated towards implementation.
As Willie points out, every agency he works with is developing zero trust strategies and architectures. However, he notes these architectures can be extremely complex, especially when adding in cloud and containerized environments.
2024 Predictions: Observability Critical for Security in Complex Cloud Environments
Ross echoes Willie's point that there is an increasing movement towards cloud-based environments. This is driving changes to FedRAMP to accommodate the proliferation of SaaS applications.
With more enterprise environments leveraging SaaS apps, complexity is being introduced. Ross predicts that to protect, understand and maintain visibility across such complex environments with many different applications, overarching observability will become a necessity.
Impact of the Shift Towards Cloud-Based Environments and SaaS Applications
The shift towards cloud-based environments and SaaS applications ties back to the FedRAMP changes and predictions from Ross. As agencies move to the cloud and adopt more SaaS apps, they lose visibility and observability.
Willie predicts observability will become "connective tissue" across zero trust architectures to provide that much-needed visibility across various pillars like devices, networks and users.
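To suggest what that connective tissue might look like in practice (a toy sketch with made-up event data, not Dynatrace functionality), telemetry from different pillars can be keyed to a common subject so that findings which look minor in isolation stand out when they line up.

```python
from collections import defaultdict

# Illustrative telemetry drawn from different zero trust pillars.
events = [
    {"pillar": "user",    "subject": "alice", "signal": "unusual_login_hour"},
    {"pillar": "device",  "subject": "alice", "signal": "endpoint_agent_disabled"},
    {"pillar": "network", "subject": "alice", "signal": "large_outbound_transfer"},
    {"pillar": "user",    "subject": "bob",   "signal": "normal_login"},
]

def correlate_by_subject(events: list[dict]) -> dict[str, set[str]]:
    """Group suspicious signals by subject so findings from separate pillars are seen together."""
    pillars_seen: dict[str, set[str]] = defaultdict(set)
    for event in events:
        if event["signal"] != "normal_login":
            pillars_seen[event["subject"]].add(event["pillar"])
    return pillars_seen

# A subject with suspicious signals across several pillars is a higher-priority lead.
for subject, pillars in correlate_by_subject(events).items():
    if len(pillars) >= 2:
        print(f"{subject}: correlated findings across {sorted(pillars)}")
```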
The Rise of User Experience in Government Systems: "I think we're gonna see more and more, of a focus on user experience because I believe with all the things we're talking about, user experience could be impacted." — Willie Hicks
Importance of Observability for Visibility and Understanding
Roger concurs that visibility is crucial for security because "you can't secure what you can't see." He notes that observability and understanding where data is and what apps are doing will become a prerequisite for achieving zero trust.
The Importance of Data Visibility in Security: "Well, I think it's gonna become table stakes, if you will, when it comes to security, because you can't secure what you can't see." — Roger Cressey
Carolyn highlights how visibility has been embedded in zero trust frameworks from the beginning. However, Willie predicts its importance will be even more prominent in 2024.
AI and Technology Innovations
2024 Predictions: Navigating AI Promise and Pitfalls in the Public Sector
Roger highlighted the tremendous upside that AI-enabled customer experience solutions could provide for government agencies in improving efficiency and service delivery. However, he also noted that any negative experiences resulting from these solutions would be heavily scrutinized and amplified. This indicates there may be cautious adoption of AI in government during 2024 as agencies balance potential benefits and risks.
The Importance of Reciprocity in Government Technology: "I just hope they have the wherewithal and the focus to push the right people in the right parts of both the Department of Defense and to the federal civilian side to think about how reciprocity impacts their availability in the marketplace technology or commercial technology solutions out there." — Ross Nodurft
Willie predicted there would be carefully orchestrated success stories around AI implementations, supporting Roger's point. This suggests that while innovation will continue, government agencies will likely roll out AI solutions slowly and target opportunities where impact can be demonstrated.
Increased Investment in Security and Product Innovation
Roger predicted that defensive cyber capabilities enabled by AI will draw greater attention and interest in 2024. Willie noted that AI is also being used in more advanced cyber attacks. Together, these trends indicate there will be an increased focus on using AI responsibly to enhance security while also defending against malicious uses.
On the commercial side, Ross predicted venture capital investment into AI will accelerate in 2024, driving constant product updates across language models and other platforms. This rapid product innovation seems likely to widen the gap with public sector adoption.
2024 Predictions: Balancing AI Progress and Governance in the Public Sector
While the panelists disagreed on the likelihood of major AI regulations from Congress in 2024, Willie predicted that high-profile incidents involving AI could build pressure for new laws, even if passage takes time. He and Ross suggested implementation of AI guidance for government agencies is more likely in the near term.
The Future Impacts of AI: "I think that the developers of AI are gonna continue to set the agenda, and the deployers, in other words, all the sectors as well as industry sectors, the developers, the deployers are still gonna be playing catch up." — Roger Cressey
Roger noted that negative experiences with AI in government would also spur calls for regulation. However, he said acting prematurely without understanding the impacts could pose challenges. Together, these perspectives indicate oversight and governance guardrails for AI will increase but could slow adoption if not balanced thoughtfully.
2024 Predictions: AI Policy Progress and Global Technology Leadership
Potential Dysfunction in Congress Impacting Regulatory Progress
Roger points out the significant disagreement between the House and Senate that could prevent Congress from finding common ground on AI regulation in 2024. The divide relates to whether the focus should be on continuing innovation or implementing more safeguards and oversight. Meaningful AI legislation at a national level would require lengthy deliberation and consensus-building that likely won't occur in an election year.
Potential Motivation for U.S. Innovation by China's Advancements in AI
According to Roger, China's rapid advances in AI development and utilization could light a fire under the U.S. administration and Congress to accelerate American innovation in this area. However, the U.S. policy community also wants to ensure AI progresses responsibly. Roger argues China's AI capabilities could be an impetus for shaping U.S. strategy in 2024, balancing both innovation and risk management.
The Global Race for AI Dominance: "Where China is moving rapidly and creatively on AI development, adoption and deployment will be a jet fuel for motivating the administration and congress to do more regarding how can innovation on the U.S. side regarding AI move quicker." — Roger Cressey
Industry Adaptation to Change
2024 Predictions: Navigating Changes to FedRAMP and Industry Adaptation
Ross discusses some of the challenges the industry may face in adapting to the changes outlined in the anticipated 2023 FedRAMP reauthorization memo. He notes that while the intent of the memo is to streamline and open up the authorization process to allow more applications into the pipeline faster, implementing these changes could initially cause some disruption.
Ross predicts there may be a "learning curve" as agencies and vendors figure out how the changes impact their specific systems and day-to-day operations. This could temporarily slow things down until the new processes are fully understood. However, Ross expects that after this initial bumpy period, the changes will ultimately enable faster movement of applications through the FedRAMP process.
The Government's Aim to Create a Process for a Smoother Transition
Ross highlights that the government's aim in revising the FedRAMP authorization process is to make it easier for agencies to access and leverage innovative cloud-based technologies. The memo revisions seek to create multiple pathways for obtaining authorizations, rather than just one narrow pipeline that applications must move through.
Discussing the Future of AI: "We gotta talk about, whether it's AI governance, whether it's innovation in AI, it's AI risks, and really understanding how do we balance all 3 of those in a way while we're still moving forward." — Roger Cressey
The hope is that these process improvements will pave the way for more small and medium cloud-based software companies to get their products authorized for use in government. This will give agencies more options and flexibility in adopting modern solutions. However, Ross cautions that in the short term there may be some disruptions, as outlined above.
Predictions for Significant Impact in 2024
In terms of predictions for 2024, Ross expects that the FedRAMP changes, combined with broader cloud migration efforts underway in government, will lead more agencies to request higher baseline security authorizations. Where they may have been comfortable with a FedRAMP Moderate authorization previously, Ross predicts agencies will now ask vendors for FedRAMP High in more and more cases. This will likely impact software providers, who will have to adapt their systems and applications to meet the more stringent security controls.
About Our Guests
Ross Nodurft
Ross Nodurft is the Executive Director of the Alliance for Digital Innovation (ADI), a coalition of technology companies focused on bringing commercial, cloud-based solutions to the public sector. ADI focuses on promoting policies that enable IT modernization, cybersecurity, smarter acquisition and workforce development. Prior to joining ADI, Ross spent several years working with industry partners on technology and cybersecurity policy and several years in government, in both the executive and legislative branches, including as Chief of the Office of Management and Budget's cyber team in the White House.
Roger Cressey
Roger Cressey is a Partner with Mountain Wave Ventures. He previously served as a Senior Vice President at Booz Allen Hamilton, supporting the firm's cyber security practice in the Middle East. Prior to joining Booz Allen, he was President and Founder of Good Harbor Consulting LLC, a security and risk management consulting firm.
Mr. Cressey’s government service included senior cyber security and counterterrorism positions in the Clinton and Bush Administrations. At the White House, he served as Chief of Staff of the President’s Critical Infrastructure Protection Board from November 2001 – September 2002. He also served as Deputy for Counterterrorism on the National Security Council staff from November 1999 to November 2001. He was responsible for the coordination and implementation of U.S. counterterrorism policy and managed the U.S. Government's response to multiple terrorism incidents, including the Millennium terror alert, the USS COLE attack, and the September 11th attacks.
Willie Hicks
Willie Hicks is the Public Sector Chief Technologist for Dynatrace. Willie has spent over a decade orchestrating solutions for some of the most complex network environments, from cloud to cloud-native applications and microservices. He understands how to track and make sense of systems and data that have grown beyond human ability. Working across engineering and product management to ensure continued growth and speed innovation, he has implemented artificial intelligence and automation solutions across hundreds of environments to tame and secure their data.
Episode Links
On this special So What? episode we go deeper into some of the top stories being covered on the It's 5:05! podcast with It's 5:05! contributing journalist Tracy Bannon. How are cybersecurity stress tests battling misinformation and aiding in election security? Is AI contributing to election disinformation? How is the CIA using SpyGPT? Come along as Carolyn and Tracy go beyond the headlines to address all these questions and more.
Key Topics
In their conversation, Carolyn and Tracy discuss the imperative for both individuals and organizations to embrace robust cybersecurity measures. In an era where data breaches and cyber attacks are on the rise, the implementation of effective security protocols is not just a matter of regulatory compliance but also of safeguarding the privacy and personal information of users. Tracy emphasizes the continuous need for cybersecurity vigilance and education, highlighting that it is a shared responsibility. By making use of resources like the CISA cybersecurity workbook, Carolyn suggests, individuals and businesses can receive guidance on developing a more secure online presence, which is crucial in a digital ecosystem where even the smallest vulnerability can be exploited.
Addressing Biases in AI to Align With Public Interest and Democratic Values
Tracy expresses concerns over the biases that can be present in AI systems, which can stem from those who design them or the data they are trained on. Such biases have the potential to influence a vast array of the decisions and analyses AI makes, leading to outcomes that may not align with the broad spectrum of public interest and democratic values. An important aspect of responsible AI use is ensuring that these technological systems are created and used in a way that is fair and equitable. This means actively working to identify and correct biases, ensuring transparency in AI operations, and constantly checking that AI applications serve the public good without infringing upon civil liberties or creating divisions within society.
Demystifying Cybersecurity: "We need that public understanding, building this culture of security for everybody, by everybody. It becomes a shared thing, which should be something that we're teaching our children as soon as they are old enough to touch a device." — Tracy Bannon
The Proliferation of Personal AI Use in Everyday Tasks
The conversation shifts toward the notion of AI agents handling tasks on behalf of humans, a concept both cutting-edge and rife with potential pitfalls. Carolyn and Tracy discuss both the ease and the potential risks of entrusting personal tasks to AI. On one hand, these AI agents can simplify life by managing mundane tasks, optimizing time and resources, and even curating experiences based on an in-depth understanding of personal preferences. Yet Tracy questions what the trade-off is, considering the amount of personal data that must be shared for AI to become truly "helpful." This gives rise to larger questions about the surrender of personal agency in decision-making, the erosion of privacy and the ever-present threat of such tools being exploited for nefarious purposes.
CISA's Cybersecurity Workbook
Enhancing Accessibility with AI Use: Summarizing Complex Documents through Generative Tools
Tracy introduces the concept of leveraging generative AI tools such as ChatGPT to summarize lengthy documents. This approach provides a way to digest complex material quickly and efficiently: users can feed a PDF or a website link into ChatGPT and request a summary, which the tool produces by analyzing the text and presenting the key points. Tracy emphasizes this method as a step toward making dense content, like government reports or lengthy executive orders, more accessible. She then transitions to discussing CISA's cybersecurity workbook, illustrating a movement toward disseminating important information in a format that a broader audience, not just tech experts, can understand and apply. Tracy appreciates the effort by CISA to create resources that resonate with every level of technical knowledge.
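As a rough sketch of the summarization workflow Tracy describes (not her tooling; the library, model name and prompt are assumptions), a few lines of Python can send a document's text to a hosted generative model and ask for a plain-language summary.

```python
import os
from openai import OpenAI  # assumes the `openai` package and an OPENAI_API_KEY environment variable

def summarize(document_text: str, max_words: int = 200) -> str:
    """Ask a generative model for a plain-language summary of a long document."""
    client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": "Summarize documents for a non-technical reader."},
            {"role": "user", "content": f"Summarize in at most {max_words} words:\n\n{document_text}"},
        ],
    )
    return response.choices[0].message.content

# Hypothetical usage; the file name is made up for the example.
# with open("executive_order.txt") as f:
#     print(summarize(f.read()))
```

The same caution Tracy applies elsewhere holds here: summaries are a reading aid, and anything consequential should be checked against the source document.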
Comprehensive Guidance for Security Measures
The guide provided by CISA, Tracy notes, is robust in offering detailed strategies for planning and implementing cybersecurity measures. The workbook does not shy away from diving deep into the assessment of potential cyber risks, and it details leading practices that organizations can adopt. Planning for incident response is a highlighted area, acknowledging that security breaches are a matter of when, not if. The workbook thus serves as an invaluable reference for initiating proactive steps to fortify against cyber threats. This level of guidance serves not only as a tool for implementing robust security measures but also as a learning resource that promotes a widespread understanding of cybersecurity best practices.
Government's AI Use
Potential Introduction of Generative AI by the CIA
Tracy and Carolyn discuss the CIA's plans to potentially introduce generative AI through a program dubbed "SpyGPT." The idea behind this integration is to enable the parsing and understanding of extensive open-source data more efficiently.
Generative AI, similar in concept to models like ChatGPT, could revolutionize how intelligence agencies handle the vast amounts of data they collect. If implemented, this AI would be able to generate new content based on massive datasets, providing insights that could be invaluable for intelligence processing. Carolyn draws comparisons to traditional methods of intelligence gathering, noting that such technological advancements could have helped in past events had they been available. In response, Tracy emphasizes the historic struggle of intelligence agencies to rapidly sort through surveillance information, a challenge that tools like SpyGPT could mitigate.
The Double-Edged Sword of AI Use in Predictive Analysis
A tool like SpyGPT has the potential to rapidly identify patterns and connections within data. This could lead to quicker and more accurate intelligence assessments. Carolyn points to the use of crowdsourced information during the Boston Marathon bombing as an example of how rapid data correlation and analysis can be critical in national security efforts. The ability to predict and possibly prevent future threats could be significantly enhanced.
The Dangers of Internet Era Propaganda: "I can take any idea, and I can generate vast amounts of text in all kinds of tones, from all different kinds of perspectives, and I can make them pretty ideal for Internet era propaganda." — Tracy Bannon
However, as Tracy notes, the power of such technology is a double-edged sword, raising concerns about privacy, the potential for misuse and ethical implications. The conversation raises the specter of a "Minority Report"-esque future, where predictive technology verges on the invasive. Both Tracy and Carolyn agree on the tremendous responsibilities that come with the implementation of generative AI when it intersects with privacy, civil liberties and security.
Election Security
The Critical Role of AI Use in Election Security Stress Testing
Stress testing in the context of election security revolves around rigorously probing the voting system to uncover flaws or weaknesses. This process requires collaboration between various stakeholders, including the manufacturers of voting machines, software developers and cybersecurity experts. Tracy emphasizes the crucial nature of these simulated attacks and real-world scenarios, which help reveal potential points of exploitation within the system. Identifying these vulnerabilities well before an election gives officials the time to address and reinforce weak spots, ensuring the reliability and resilience of the electoral process against cyber threats.
AI Use in Unveiling Election System Vulnerabilities
Tracy discusses the necessity of not just identifying but also openly revealing discovered vulnerabilities within election systems as a means to foster trust among the populace. Transparency in the security measures taken, and clear communication of vulnerabilities found, when managed properly, instills a higher sense of confidence in the electoral system's integrity. This approach also plays a pivotal role in countering misinformation: by proactively conveying the true state of system security and the efforts being taken to remedy issues, officials can help dismantle unfounded claims and skepticism about the election infrastructure from various sectors of society.
Exploring the Impact of AI Use in Deepfake Technology and Artificial Persona Creation
Capabilities of Deepfake Technology and AI Language Models
Recent advancements in AI and deepfake technology have brought breathtaking capabilities, primarily the power to manipulate audio and video content with astounding realism. Tracy emphasizes the profound implications of this technology, specifically pointing to language models such as "Vall-E," which can simulate a person's voice from just a few seconds of audio input.
The Rise of Deepfakes: "Imagine what's gonna happen with the deepfake. Take a right? I can take your video. I can take your voice." — Tracy Bannon
This technology uses sophisticated algorithms to detect nuances in speech patterns, allowing it to generate new audio that sounds like the targeted individual, effectively putting words into their mouth that they never actually said. This ability extends beyond simple mimicry; it propels the potential for creating audio deepfakes that can be nearly indistinguishable from genuine recordings. Such capabilities raise significant concerns about the reliability of auditory evidence and the ease with which public opinion could be manipulated.
Creation of Artificial Personas Using AI Tools
Tracy brings to light the increasingly effortless creation of false personas through AI tools such as ChatGPT, an AI language model capable of generating human-like text. These tools can fabricate compelling narratives, mimic specific writing styles and create non-existent but believable social media profiles or entire personas. Tracy points out how these synthetic entities can be programmed to deliver credible-sounding propaganda, influence political campaigns, or sow discord by spamming internet platforms with targeted misinformation. The creation of these artificial personas signifies a dramatic shift in how information can be disseminated, posing risks of eroding trust in digital communication and complicating the battle against fake news.
About Our Guest
Tracy Bannon is a Senior Principal with MITRE Lab's Advanced Software Innovation Center and a contributor to the It's 5:05! podcast. She is an accomplished software architect, engineer and DevSecOps advisor, having worked across commercial and government clients. She thrives on understanding complex problems and working to deliver mission and business value at speed. She's passionate about mentoring and training and enjoys community and knowledge-building with teams, clients and the next generation. Tracy is a long-time advocate for diversity in technology, helping to narrow the gaps as a mentor, sponsor, volunteer and friend.
Episode Links
As technology rapidly innovates, it is essential we talk about technology policy. What better way to get in the know than to have an expert break it down for us? Meet Ross Nodurft, the Executive Director of the Alliance for Digital Innovation. Ross dives in, explaining the evolution of FedRAMP controls and the recent, giant AI Executive Order (EO) from the White House. Listen in to find out what this EO means for the government, the industry and the workforce as the U.S. attempts to implement policy ahead of AI innovation.
Key Topics
When FedRAMP was established over a decade ago, the focus was on managing the accreditation of emerging cloud infrastructure providers to support the initial migration of workloads. The baseline standard was FedRAMP Moderate, which addressed a "good amount" of security controls for less risky systems. However, Ross explains that increasing volumes of more sensitive workloads have moved to the cloud over time, including mission-critical systems and personal data. Consequently, agencies want to step up from Moderate to the more stringent requirements of FedRAMP High to protect higher-risk systems. This includes only allowing High cloud services to interact with other High cloud applications.
The Evolution of Cloud Computing: "So right now, we're at the point where people are existing in thin clients that have access to targeted applications, but the back end compute power is kept somewhere else. It's just a completely different world that we're in architecturally." — Ross Nodurft
The Future of Government Technology: Streamlining FedRAMP for the SaaS-Powered Enterprise
According to Ross, the COVID-19 pandemic massively accelerated enterprise cloud adoption and consumption of SaaS applications. With the abrupt shift to remote work, organizations rapidly deployed commercial solutions to meet new demands. In the federal government, this hastened the transition from an earlier focus on cloud platforms to widespread use of SaaS. Ross argues that FedRAMP has not evolved at pace to address the volume and type of SaaS solutions now prevalent across agencies. There is a need to streamline authorization pathways attuned to this expanding ecosystem of applications relying on standardized baseline security controls.
High-Level Security Controls for Sensitive Data in the Cloud
Addressing Data Related to Students and Constituents
Ross states that as agencies move more sensitive workloads to the cloud, they are stepping up security controls from FedRAMP Moderate to FedRAMP High. Sensitive data includes things like personal HR data or data that could impact markets, as with some of the work USDA does. Willie gives the example of the Department of Education or Federal Student Aid, which may have sensitive data on students that could warrant higher security controls when moved to the cloud.
Ross confirms that is absolutely the case: the trend is for agencies to increase security as they shift more sensitive systems and data to the cloud, especially with remote work enabled by the pandemic. So agencies with data related to students, constituents, healthcare, financial transactions and the like are deciding to utilize FedRAMP High, or to tailor Moderate with additional controls, when migrating such workloads to ensure proper security and rights protections.
The Future of Government Technology: Navigating the Tradeoffs Between Cloud Innovation and Data Security
As Ross explains, FedRAMP High means you can only interact with other cloud applications that are also FedRAMP High. So there is segmentation occurring, with more sensitive data and workloads being isolated via stricter security controls. However, he notes it is not a "bull rush" to FedRAMP High. Rather, agencies are steadily moving in cases where the sensitivity of the data warrants it.
Willie then asks about the costs associated with these stricter cloud security authorizations, given that even Moderate is expensive. Ross explains that policy discussions are underway about making FedRAMP more streamlined and cost-effective, so that innovative commercial solutions can still be sold to the government without vendors having to completely re-architect their offerings just for these processes. The goal is balancing the accessibility of cloud solutions with security appropriate to the sensitivity of the data.
Modernizing Federal Government IT: "We need to stop requiring companies to have their own completely separate, over-architected environment. We want commercial entities to sell commercially built and designed solutions into the federal government." — Ross Nodurft
Laying the Groundwork: The AI Executive Order and the Future of Government Technology
Robust Framework for Future Policy and Legal Development
Ross states that the AI Executive Order is the biggest and most robust executive order he has seen. He explains that it attempts to get ahead of AI technology development by establishing a framework for future policy and legal work related to AI. Ross elaborates that additional regulatory and legal work will be needed, and that the order aims to "wrap its arms around" AI well enough to build further policy on the initial framework it provides.
According to Ross, the order covers a wide range of topics including AI in critical infrastructure, generative AI, immigration reform to support the AI workforce, and government use of AI. He mentions the order addresses critical infrastructure like pipelines, hospitals, transportation systems and more. It also covers immigration policy changes needed to ensure the U.S. has the talent to advance AI. Additionally, it focuses heavily on government consumption and deployment of AI.
Mapping the Future of Government Technology
Navigating the Future of Government Technology
The AI executive order tasks the Office of Management and Budget (OMB) with developing guidance for federal agencies on the safe and secure adoption of AI. Specifically, Ross states that the order directs the Federal CIO and other administration officials to establish rules that allow government consumption of AI in a way that protects safety and rights. Before writing this guidance, the order specifies that OMB must consider the impacts of AI on safety-critical infrastructure as well as on rights like privacy and fairness.
Ross explains that OMB recently released draft guidance for public comment. He says this draft guidance contains several key components. First, it establishes AI governance requirements, directing every major federal agency to appoint a Chief AI Officer and create an AI council with agency leadership that will oversee adoption. Second, it mandates that agencies take inventory of existing AI use and develop plans detailing how they intend to utilize AI going forward.
Requirements for Agencies to Appoint a Chief AI Officer
According to Ross, a primary governance requirement in the OMB draft guidance is that all major agencies assign a Chief AI Officer to spearhead their efforts. Additionally, he notes that the guidance orders agencies to construct AI councils with membership spanning functions like IT, finance, HR and acquisition. Ross specifies that these councils will be led by the Deputy Secretary and Chief AI Officer of each department.
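As a rough illustration of the inventory and governance requirements Ross describes, the sketch below models a single AI use-case record and a check for which entries an agency's AI council might need to review. The field names and the needs_council_review rule are hypothetical, not drawn from the OMB draft guidance.

```python
# Hypothetical AI use-case inventory record; field names and the review rule are
# illustrative and are not taken from the OMB draft guidance.
from dataclasses import dataclass, field


@dataclass
class AIUseCase:
    name: str
    system_owner: str
    purpose: str
    rights_impacting: bool             # could the use affect privacy, fairness or other rights?
    safety_impacting: bool             # could the use affect safety-critical operations?
    approved_by_cai_officer: bool = False
    open_issues: list[str] = field(default_factory=list)


def needs_council_review(use_case: AIUseCase) -> bool:
    """Flag rights- or safety-impacting uses not yet approved by the Chief AI Officer."""
    return (use_case.rights_impacting or use_case.safety_impacting) and \
        not use_case.approved_by_cai_officer


inventory = [
    AIUseCase(
        name="Benefits chatbot",
        system_owner="Office of the CIO",
        purpose="Answer routine constituent questions",
        rights_impacting=True,
        safety_impacting=False,
    ),
]
for uc in inventory:
    if needs_council_review(uc):
        print(f"Escalate to the AI council: {uc.name}")
```

The value of such an inventory is less the data structure itself than the discipline it imposes: every AI use is named, owned and assessed for rights and safety impact before it is approved.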
The Uncertain Future of Government Technology
Collaboration, Prioritization of Assessments, Compliance, Monitoring and Validation
Ross highlights the need for collaboration between industry and agencies on issues like prioritization, timing, the specifics of compliance and attestation, and who pays for and validates assessments. The order pushes the use of AI, but its lack of specifics could slow the adoption of widely used technologies that include AI components. Ross notes this could introduce friction, slowing productive technologies at a time when faster digital services are in demand. Compliance pathways need to be better defined to avoid nervousness about using AI.
AI Ethics and Regulation: "You've got to run as close to live testing as possible, you've got to have human people factored into the decision-making engines." — Ross Nodurft
While embracing AI, the order does not detail how to facilitate adoption, which Ross says could cause confusion across agencies. His trade association, ADI, sees a need to add specifics around governance mechanisms to avoid inconsistencies. The lack of clarity risks friction and slows the incorporation of AI, which Ross believes is imperative.
Balancing Innovation and Responsibility in Emerging Technologies
Demand for a Digital Environment and the Importance of Observability
Ross states that there is a rapid move toward a digital environment across all services, driven by demand from millennials, Gen X and Gen Z. He emphasizes that everything now needs an app or digital access to engage users. Ross then highlights how Dynatrace provides important observability into these new cloud-based architectures, allowing agencies to understand usage, interactions and performance. He argues this is essential to properly managing digital services.
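For readers who want a concrete picture of that kind of instrumentation, the sketch below uses OpenTelemetry tracing purely as a vendor-neutral stand-in for the observability Ross describes; it is not Dynatrace's own API, and the service and attribute names are made up.

```python
# Generic observability sketch using OpenTelemetry as a vendor-neutral stand-in
# for the kind of instrumentation discussed above (not the Dynatrace API itself).
# Requires: pip install opentelemetry-sdk
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

# Configure a tracer that prints spans to the console; a real deployment would
# export to an observability backend instead.
provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)
tracer = trace.get_tracer("digital-service-demo")


def handle_benefits_request(user_id: str) -> str:
    # Wrap the request in a span so usage, latency and errors become observable.
    with tracer.start_as_current_span("handle_benefits_request") as span:
        span.set_attribute("app.user_id", user_id)  # hypothetical attribute name
        return "request processed"


print(handle_benefits_request("user-123"))
```

The design point is that each user interaction emits a traceable span, which is the raw material agencies need to understand usage, interactions and performance across a cloud-based service.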
Ross worries that the new AI executive order guidance lacks specifics around compliance, which risks creating friction in adopting widely used technologies like Dynatrace that have AI components. He states there is uncertainty over whether tools like Dynatrace must be inventoried and assessed under the new policy. If so, there are many open questions around prioritization, timing, specific compliance activities and who pays the associated costs. Ross emphasizes that, without more clarity, this uncertainty could hinder cloud adoption.
Responsibility and Control Over the Use of AI Technology
Ross stresses that while AI technology enables incredible things, we retain full control of, and responsibility for, its uses. He states we must consider processes and safeguards that provide oversight and allow intervention in AI systems. Ross argues we cannot afford to deploy AI blindly, but highlights that it is in our power to leverage these technologies to benefit humanity with appropriate guardrails.
Shaping the Future of Government Technology
The Future of Government Technology and Managing Change for Emerging Fields
Ross asserts that there is greater intentionality today around anticipating risks from emerging technology than in past eras. He advocates building off switches and review processes that allow understanding and course correction around new innovations like AI. Ross states this considered approach is essential for nanotechnology, quantum computing and other exponentially advancing fields.
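One way to picture the "off switch" and review process Ross advocates is a simple human-in-the-loop gate around automated recommendations. The sketch below is hypothetical and not taken from the executive order or OMB guidance; the flag, function and case names are illustrative.

```python
# Hypothetical human-in-the-loop "off switch" pattern; the flag, function and
# case names are illustrative, not drawn from the executive order or OMB guidance.
AI_ASSIST_ENABLED = True  # the "off switch": set to False to fall back to manual processing


def human_review(case_id: str, recommendation: str) -> bool:
    """Stand-in for a real review queue; a production system would route to a human reviewer."""
    print(f"Review requested for {case_id}: {recommendation}")
    return True  # assume the reviewer approves in this sketch


def decide(case_id: str, recommendation: str, consequential: bool) -> str:
    """Apply an AI recommendation only if assistance is enabled and, for consequential cases, approved."""
    if not AI_ASSIST_ENABLED:
        return "routed to manual processing"
    if consequential and not human_review(case_id, recommendation):
        return "rejected by human reviewer"
    return f"applied: {recommendation}"


print(decide("case-42", "expedite application", consequential=True))
```

The pattern keeps the automation useful while preserving the two controls Ross emphasizes: a switch that can disable the AI path entirely, and a human decision point for anything consequential.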
The Influence of Artificial Intelligence in Policy and Legal Development: "But artificial intelligence is now more than ever being built into everything that we do technologically." — Ross Nodurft
Ross disputes the concern that AI will replace jobs, arguing instead that it will shift the skills required of humans. He points to comparable historical technology shifts that demanded new expertise, like the transition from horses to locomotives. Ross states that AI moves job responsibilities in different directions rather than eliminating careers, necessitating learning new tools and approaches.
Establishing Processes and Organizational Structures for the Future of Government Technology
Ross highlights how the AI executive order establishes agency governance bodies to oversee adoption. He details required personnel, like Chief AI Officers, who must review and approve AI use. Ross states these processes aim to identify risks in using innovations like AI while still encouraging adoption. He argues this organizational oversight is a new paradigm essential for emerging technologies.
About Our Guest
Ross Nodurft is the Executive Director of the Alliance for Digital Innovation (ADI), a coalition of technology companies focused on bringing commercial, cloud-based solutions to the public sector. ADI focuses on promoting policies that enable IT modernization, cybersecurity, smarter acquisition and workforce development. Prior to joining ADI, Ross spent several years working with industry partners on technology and cybersecurity policy, and several years in government in both the executive and legislative branches, including as Chief of the Office of Management and Budget's cyber team in the White House.