Unpacking what Microsoft’s agentic AI announcements mean for cybersecurity in 2025
https://cloudguard.ai/resources/unpacking-microsoft-agentic-ai/
Mon, 27 Jan 2025

At CloudGuard, we are always looking out for the trends shaping the future of cybersecurity.

One of the biggest announcements to catch our attention came from Microsoft’s Ignite 2024 conference where they introduced a concept called “Agentic AI.”

If you have not heard that term before, imagine an AI that does more than just follow instructions. It learns, adapts and makes decisions on its own to achieve set goals. In other words, it is an AI with a level of autonomy we have not really seen until now.

In this blog, I will walk you through what Microsoft announced, explain the idea behind agentic AI and dig into why it could matter so much for cybersecurity teams everywhere.

Microsoft’s agentic AI announcement at Ignite 2024

Chairman and CEO Satya Nadella speaks at Microsoft Ignite 2024.

At Ignite 2024, Microsoft showcased how they are pushing AI beyond the traditional rule-based approach.

Their vision: systems that adapt in real time without needing constant human input. Instead of just following preset instructions, these new AI “agents” can map out their own mini-goals, change tactics on the fly and keep learning as fresh data comes in.

What exactly was announced?


During the keynote, Microsoft demonstrated prototypes blending agentic AI into different IT environments.

One agent coordinated software patches across a huge global network, making calls without someone hovering over it. Another dove deep into enterprise-scale infrastructure to detect vulnerabilities. It scanned multiple layers, found weak spots, applied fixes and learned from each step to perform better over time.

For more details you can check out Microsoft Ignite’s official site or their blog on Responsible AI.

Microsoft’s vision and goals


Microsoft wants these AI agents to be not just powerful but also ethically aligned.

The idea is not to build rogue machines that act against human values. Rather, it is about freeing up security professionals from repetitive or overly complicated tasks so they can focus on what really matters.

They highlighted their commitment to principles like fairness, reliability, safety, privacy, security, transparency and accountability. In other words, the human element still sets the tone and defines the boundaries.

Microsoft’s stance echoes their Responsible AI Principles.

Community reactions and early feedback

After Ignite the community response was a mix of excitement and healthy skepticism.

Many cybersecurity experts were intrigued by how these capabilities could speed up threat detection. Imagine cutting down response times from days to minutes because your AI can outpace attackers who are changing their methods all the time.

On the flip side there were concerns.

Some asked if cybercriminals would also use agentic AI, escalating attacks to a new level. Others wondered if these systems could create unexpected vulnerabilities or leave human analysts feeling less in control.

Microsoft addressed these worries by stressing built-in safeguards and a strong ethical code, but the debate is far from settled.

For more context on how similar technologies have been discussed you might want to check out resources like Forrester’s AI research or industry expert commentary on managing new cybersecurity tools.

Why AI matters for cybersecurity

Make no mistake. Agentic AI represents a turning point.

Threats are getting more complex and automated, and we need defences that can keep up. Adaptive learning and autonomous problem-solving are powerful capabilities. Still, using them responsibly means considering new ethical dilemmas and ensuring humans stay firmly in the driver’s seat.

At CloudGuard, we think it is critical to keep an open mind, stay involved in these discussions and get ready for a future where autonomous, AI-driven cybersecurity might be closer than we think.


What Is agentic AI? (and what it isn’t)

Lance Braunstein, head of Aladdin Engineering at BlackRock, and Judson Althoff, executive vice president and chief commercial officer at Microsoft, speak at Microsoft Ignite 2024.

Today’s reactive AI vs agentic AI

Right now most advanced AI models wait for commands.

They are like super smart assistants that answer questions and follow instructions but never take the initiative. In contrast, agentic AI can spot objectives on its own, break them down into smaller tasks and proactively pursue them.

Defining Agentic AI

Agentic AI is more than a passive tool.

Take a corporate network security scenario. A reactive AI might run scans and hand you a list of issues. Agentic AI would detect suspicious activities, suggest fixes, implement patches and then keep an eye on how attackers respond.

It would adjust its approach, learning as it goes. This goal-oriented and adaptive nature gives it a real edge.
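In code terms, that adaptive, goal-pursuing behaviour boils down to a sense-act-adapt loop. The sketch below is purely illustrative: the suspicion scores, thresholds and "patch" action are all invented for this example and are not any real product API.

```python
def agentic_defence_loop(network_state, max_rounds=3):
    """Minimal sense -> act -> adapt cycle (illustrative only; scores are 0-100)."""
    threshold = 80  # suspicion score needed before the agent acts
    actions = []
    for _ in range(max_rounds):
        # sense: anything at or above the current threshold is treated as a threat
        for finding in network_state:
            if finding["score"] >= threshold:
                actions.append(("patch", finding["host"]))  # act, no human in the loop
                finding["score"] = 0                        # assume the fix landed
        # adapt: if activity is lurking just below the bar, tighten the bar
        if any(50 < f["score"] < threshold for f in network_state):
            threshold -= 10
    return actions

state = [{"host": "web-01", "score": 90}, {"host": "db-01", "score": 60}]
print(agentic_defence_loop(state))
```

The point of the toy is the last step: a reactive scanner would report `db-01` and stop, while the agent revises its own criteria between rounds until the borderline host is handled.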

What agentic AI isn’t

Agentic AI is not magic and it is certainly not a step toward sentient robots.

It does not have feelings or desires. It also does not remove humans from the picture.

Security professionals still set the big goals, define compliance rules and outline ethical boundaries. The AI works within those guidelines.

Humans shift from micromanaging to guiding, ensuring the AI’s actions align with the company’s core values.

Addressing Misconceptions

Some worry that Agentic AI will replace human cybersecurity experts.

Not so. Yes, it can handle repetitive tasks or detection at a scale that would exhaust human teams, but it still relies on our judgment and direction.

As Dr. Sarah Lin, a cybersecurity researcher at the University of Washington, points out:

Agentic AI acts like a skilled problem-solver rather than a passive tool. Yet it still needs professionals to guide its objectives, interpret results and make ethically sensitive decisions.

Expert Perspectives

John Martinez, a security consultant at Forrester, noted:

Agentic AI can accelerate how we respond to threats but we must ensure it does not introduce new vulnerabilities or diminish human accountability.

This caution aligns with what we have seen, and is echoed in resources like our own incident response guides.

Practical Takeaways

For cybersecurity professionals, understanding agentic AI is not just about knowing the tech specs. It is about reshaping how we think about AI.

Instead of treating it like a tool that needs constant babysitting, we can see it as a partner working alongside us.

Your job will shift toward setting the objectives and ethical standards, and ensuring the AI’s actions reflect your organisation’s values.

The good, the bad and the ugly of agentic AI in cybersecurity

A stylized computer screen demonstrating the Trustworthy AI user interface

The good: Stronger defences and proactive security

  • Faster threat detection and response: Agentic AI can spot and neutralise threats in record time, potentially turning days into minutes.
  • Adaptive defence mechanisms: Attackers evolve, but agentic AI can change its tactics just as fast, making it harder for bad actors to succeed.
  • Proactive vulnerability identification: Agentic AI can find and fix weak spots before attackers even get a chance to exploit them.

The bad: Potential misuse and dependency

  • Empowering attackers: Cybercriminals could deploy agentic AI for their own malicious ends, making attacks more adaptive and dangerous.
  • Detection challenges: AI-powered attacks will require next-level defensive measures, and not everyone will be ready.
  • Over-reliance on AI: If you lean too heavily on agentic AI and it fails or is compromised, your team could be caught off guard.

The ugly: Unintended consequences and ethical quandaries

  • Collateral damage: Agentic AI might shut down a network segment due to suspicious activity, accidentally locking out legitimate users.
  • Ethical decision-making without humans: If the AI can act without asking first, who is responsible when something goes wrong?
  • Escalation of the AI arms race: As defenders adopt agentic AI, so will attackers, leading to an endless cycle of one-upmanship.


Finding the right balance

We do not need to shy away from agentic AI just because it can introduce risks.

Instead, we should acknowledge these issues and work proactively to manage them. Clear guidelines, regular audits and strong human oversight can mitigate many of these problems.

Organisations should train their teams to understand how to guide and supervise these AI tools effectively. 

20 other announcements at Ignite 2024


Beyond the agentic AI announcement, Microsoft rolled out a variety of other innovations at Ignite 2024.

While these topics may not be directly related to agentic AI, they paint a broader picture of Microsoft’s trajectory in AI, productivity, cloud computing and resilience.

  1. A live cybersecurity challenge where participants tested their skills against simulated zero-day attacks. This spotlighted the importance of proactive security and fast responses.
  2. A redesigned interface for Microsoft’s AI Copilot that makes AI tools more accessible and user friendly.
  3. Enhanced automation features that let Copilot carry out specified commands. This cuts down on repetitive tasks and frees up time for more strategic work.
  4. Intelligent AI agents integrated into Microsoft 365 that help with scheduling, drafting emails and managing documents more efficiently.
  5. Specialised AI tools in SharePoint that assist with content organisation, permissions and site management.
  6. A platform that allows teams to build, train and deploy custom AI agents without heavy coding or AI expertise.
  7. Data-driven insights that help identify trends, patterns and opportunities for better decision-making.
  8. A cloud integration for Windows, enabling seamless switching between physical desktops and cloud-based Windows environments.
  9. Measures to improve Windows system stability, reducing downtime and speeding up recovery from cyberattacks or failures.
  10. Localised Azure services placed close to regional data centres, improving latency and helping meet compliance requirements.
  11. Security modules built into Azure’s infrastructure for stronger encryption and key management.
  12. A dedicated data processing unit designed to accelerate network and security workloads in the cloud.
  13. Integrating NVIDIA’s advanced AI accelerators with Azure to handle more complex machine learning tasks.
  14. High-performance Azure computing instances that deliver increased processing power for demanding workloads.
  15. A new way to manage and analyse large-scale databases within Microsoft’s Fabric ecosystem, helping teams gain insights faster.
  16. A platform for building, refining and scaling AI models in a single environment, reducing complexity and accelerating innovation.
  17. Tools that let organisations tweak and refine AI models to meet their unique needs.
  18. A service to simplify the deployment and management of AI agents, making it easier to roll out intelligent solutions.
  19. Built-in analytics that measure AI agent performance, highlighting strengths and areas for improvement.
  20. A partnership aimed at bringing quantum computing capabilities into the Azure cloud to tackle problems that traditional computers struggle to solve.

Wrapping up

Chairman and CEO Satya Nadella speaks at Microsoft Ignite 2024. (Photo by Dan DeLong)

Microsoft’s reveal of agentic AI at Ignite 2024 is not just another AI story.

It signals a big shift in how we defend our digital worlds. At CloudGuard, we think it is crucial to understand what this technology can do and what it means for ethics, responsibility and the future of cybersecurity.

The technology offers game-changing capabilities. It can speed up threat responses and adapt on the fly. Yet it introduces new challenges and moral dilemmas.

The role of security professionals will not vanish. Instead, it will evolve into something more strategic and values-driven. By staying informed and engaged, we can shape a future that leverages agentic AI’s potential without losing sight of what matters most: trust, accountability and good old-fashioned human judgment.

Final thought

How would you integrate agentic AI into your organisation’s cybersecurity strategy while making sure you maintain responsibility and strong ethical oversight?

Azure Integration Services and AI: Key Learnings from Integrate 2024
https://cloudguard.ai/resources/integrate-2024-insights/
Wed, 31 Jul 2024

This year I once again had the pleasure of attending Integrate 2024 London, a conference which has always been important to me for both its technical content and unparalleled access to representatives from Azure Integration Services’ various Product Teams.

There was a plethora of sessions from both the Microsoft delegates and mainstays of the integration community alike, but one stood out in particular to me: a session by Kent Weare (Principal PM of Azure Logic Apps and all-round stand-up fellow).

His session was, frankly, one of the most important of the conference.

A man presenting a RAG pattern on a big screen

Kent’s session covered a lot of new features, but it was his AI demo which was important. The broad strokes of what was demonstrated are as follows:

  • Firstly, he delved into how Integration Services can be combined with generative AI in Azure OpenAI to provide more powerful integrations with low-code solutions.
  • Secondly, he showed how to implement the Retrieval-Augmented Generation (RAG) pattern with Logic Apps. What would normally require a few hundred lines of Python was reduced to about a dozen workflow designer shapes.
  • Thirdly, he enhanced this demo by integrating these workflows into an Azure OpenAI language model using the new (and still in preview) Assistants feature.
  • Lastly, he used a simple web request client to call his language model and utilise those functions, demonstrating the back-end APIs for a specialised Co-Pilot experience tailored to his demo company.

Crucially, I want to explain why his demo was so important.

From low-code to natural language processing

Ease of use

I have written some reasonably complex applications in the course of my career and I’m comfortable in creating both high-code and low-code solutions.

That said, having the option of building something with low-code which would ordinarily take me hours upon hours to do with high-code tools is well worthy of consideration when deciding what you’re going to use to build a given solution.

I’ll not harp on about this much longer, but ease of use democratizes the solutions to complex problems.

Using natural language to make requests

Kent just used natural language to make his requests.

This is critical!

To understand why, we need to take a step back from Co-Pilots for a moment and rewind time by about a decade.

“Since she’s powered by Bing, she understands the entire internet…”

Look familiar? Cortana was demoed at Build 2014, when chat bots and assistants were all the rage and were going to change the world. Let us not forget Cortana had contemporaries: Alexa, Siri et al. All came with SDKs for building integrations to make them ostensibly “smarter”. What these SDKs actually accomplished was that the agents became more capable, but definitely not smarter.

A man presenting in front of a big screen

There are two faults which fundamentally scuppered many of these chat bots:

  1. The agents would do what we said, not what we meant.
  2. Our interactions with them were still very robotic and procedural.

We’ll take a moment to explore these faults a bit further and show where modern LLMs bring improvements over their ancestors.

The limitations of existing voice assistants and chatbots in everyday tasks

Let’s take the Alexa unit in my kitchen – What I Said vs What I Meant

I like to listen to music sometimes when I cook. I have an Alexa unit in my kitchen, and she’s configured to integrate with Spotify. However, I can’t just ask Alexa to play something from my Spotify list using casual language.

My query must be specifically constructed to achieve the desired end goal.

The meaning (semantics) behind my request is irrelevant because it is the content and structure (syntax) of my request that matters. Put another way, the content must be specifically formatted for Alexa to derive intent. Consider the following sample command.

“Alexa, from Spotify play my Best of Brian Fallon and The Gaslight Anthem playlist.”

This command is onerous to articulate and not natural, but it works. I would much rather ask.

“Alexa, play my Brian Fallon playlist”.

This is because Alexa must be specifically told to launch Spotify, and she’s not smart enough to find my Brian Fallon playlist unless I get its name exactly right. Content and syntax vs semantics.

To be clear, I know the underlying semantics of the API calls between Alexa and Spotify are far more nuanced, and there are real reasons my request is hard to perform, but nonetheless, as a consumer it’s not a great experience.

Ordering food from chatbots – Procedural Conversations

Once upon a time, I was testing a chatbot for ordering food from a fast-food giant. To say it was a cumbersome experience would be an understatement.

I couldn’t say “Order a medium cheeseburger, fries, and a strawberry milkshake, hold the gherkins.”

I had to move through a workflow where my responses to prompts would be in a structured and sequential manner.

The prompts:

  1. What burger would you like? “Cheeseburger.”
  2. What size would you like? “Medium.”
  3. Would you like any extras? “No.”
  4. What sides would you like? “Fries.”
  5. What size? “Medium.” And so on.

These experiences were not great: slow, limited, and in my opinion they haven’t got much better as the years have passed. You’ll see a lot of these chatbot assistants online, and they’re nearly always a poor replacement for a human being.
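To make the contrast concrete, here is a toy sketch of the two interaction styles. Everything in it is hypothetical: the slot names and menu are invented, and a crude keyword match stands in for what an LLM actually does with language.

```python
# Procedural chatbot: one rigid question per slot, answered strictly in order.
SLOTS = ["burger", "burger_size", "side", "side_size", "drink"]

def procedural_order(answers):
    """Walks the fixed question sequence; five slots cost five conversational turns."""
    return dict(zip(SLOTS, answers))

# Natural-language style: one utterance, all slots extracted in a single pass.
MENU = {"burger": ["cheeseburger"], "side": ["fries"], "drink": ["strawberry milkshake"]}

def free_form_order(utterance):
    order = {}
    for slot, options in MENU.items():
        for option in options:
            if option in utterance.lower():  # stand-in for real language understanding
                order[slot] = option
    return order

print(procedural_order(["cheeseburger", "medium", "fries", "medium", "milkshake"]))
print(free_form_order("Order a medium cheeseburger, fries and a strawberry milkshake"))
```

The first function mirrors the five-prompt workflow above; the second captures why a single free-form sentence is so much less onerous for the user.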

Transforming general-purpose LLMs into domain-specific Co-Pilots

Domain Knowledge Extensibility

Large Language Models (LLMs) can’t provide meaningful answers to questions on information they don’t know about. Or can they?

That’s what RAG is all about: pulling data into our models in a way that lets them answer questions on that data, which they do elegantly.

To be clear, this is not my LLM blindly regurgitating my data back at me; it is my LLM understanding the semantics of that data and answering questions about it with a high degree of accuracy. There is a whole lot of vector mathematics and higher-dimensional geometry involved which I won’t pretend to understand in detail, but in short: RAG turns your data into vectors (multidimensional arrays of floating-point numbers). The LLM turns your prompt into a vector too, then looks for comparable vectors to determine a response. The distance between two vectors determines how related they are; the shorter the distance, the closer the match.
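That nearest-vector idea fits in a few lines of plain Python. The three-dimensional “embeddings” below are made up for illustration; real embedding models produce vectors with hundreds or thousands of dimensions, and cosine similarity here stands in for whatever metric a given vector store uses.

```python
import math

def cosine_similarity(a, b):
    """Higher means more related; vector stores typically rank retrievals this way."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy "embeddings": in practice these come from an embedding model.
documents = {
    "invoice policy": [0.9, 0.1, 0.0],
    "holiday policy": [0.1, 0.9, 0.1],
    "firewall rules": [0.0, 0.2, 0.9],
}
query = [0.85, 0.15, 0.05]  # pretend embedding of "how do I submit an invoice?"

best = max(documents, key=lambda name: cosine_similarity(query, documents[name]))
print(best)  # the retrieved chunk is then placed into the LLM prompt
```

The retrieved text, not the vector itself, is what gets injected into the prompt, which is how the model can answer questions about data it was never trained on.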

Action extensibility: Logic Apps and Azure OpenAI Assistants

Azure OpenAI Assistants allowed Kent to quickly refine his general-purpose LLM into a domain-specific Co-Pilot with what seemed like little effort. By defining several Logic Apps and importing them as functions, his LLM could retrieve data (using the RAG pattern) and even take actions in the real world based on natural language prompts.

Rather than his Co-Pilot just talking a good game, it could go out and do it! The subtle but important part of this is that the model understood how and when to use the functions it had been provided, and didn’t just use them blindly.
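The shape of that function-calling pattern can be sketched without any Azure services at all. Below, a stubbed “model” stands in for the LLM’s decision about which registered function fits the prompt; every name is hypothetical, and this is deliberately not the real Assistants API, just the dispatch idea behind it.

```python
# Toy "function calling" loop. A real assistant emits the function choice itself;
# here a keyword stub plays that role so the example runs standalone.
TOOLS = {}

def tool(fn):
    """Register a plain function so the 'model' may choose to invoke it."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def get_open_tickets(customer: str) -> list:
    return [f"{customer}: ticket-42"]   # stand-in for a data-retrieving Logic App

@tool
def restart_service(name: str) -> str:
    return f"{name} restarted"          # a real-world action, not just talk

def fake_model(prompt: str) -> dict:
    # A real LLM decides *whether and which* tool fits the prompt semantically.
    if "ticket" in prompt:
        return {"call": "get_open_tickets", "args": {"customer": "Contoso"}}
    return {"call": "restart_service", "args": {"name": "billing-api"}}

def run(prompt: str):
    choice = fake_model(prompt)
    return TOOLS[choice["call"]](**choice["args"])

print(run("show me Contoso's open tickets"))
print(run("please restart the billing service"))
```

The interesting property Kent demonstrated is exactly what the harness hides here: the model, not hand-written routing code, works out when each function is appropriate.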

I left that session deep in thought.

Kent had shown how easy it is to build domain-specific agents/co-pilots/chat bots using PaaS and SaaS components in a way which could be tailored to any specific problem domain. This resulted in a Co-Pilot that was actually useful!

Not a clunky chat-bot experience we would have had to assemble with blood sweat and tears in years past for a passable user experience.

I was resolved to build one of my own.

Woke up and Chose to Hack

So, fresh from the conference (well, relatively fresh, I had to take some days off on account of accidentally breaking my foot on the return trip!) I was eager to strike while the iron was hot and build a Co-Pilot.

The timing could not have been better as there was a Microsoft AI Hackathon coming up, a summer sequel to the Spring Hackathon I recently posted on LinkedIn about (a shout out to Robin Lester for organising these Hacks).

After making a case to our CTO, Javid Khan, who to be fair didn’t need much of a case to be made, CloudGuard formed a team for the Hack:

Myself, Jonathan Hartdegen and Yakub Desai.

With Jav’s blessing we were off to the races!

Well, that Hack was last week. You might be wondering: how did it go?

In short, the Summer Hack was a blast. We each earned another badge to add to our nascent collections. Proud to show mine here.

An animated rocket going into space

We achieved nearly all our stretch goals in creating our Co-Pilot and I’ll be continuing to work on and improve that prototype in the near future.

Slightly longer story: there were some head-bending concepts in there and we went off-script a bit (we ended up using both Semantic Kernel AND Azure OpenAI Assistants), but we achieved our goals and learned a lot about extending LLMs because of it.

I am not going to go into depth on the Hack in this post (I will in the near future).

Wrap Up!

I wanted to conclude on a few points to which I hope I’ve given some evidence to support:

  1. If you’re a Chief Technology Officer or Senior Manager with a training budget.
    Send your team to conferences! Why? To learn new things! To ideate! To network! I am now 2 for 2 for Integrate attendances in the last 2 years where I have come back and built something because of the ideas and tools I was shown at the conference.
  2. If you’re a software professional.
    Make a case to your manager for going to conferences, and make a case for doing some Hack events! Build, experiment, fail, succeed, learn! Hacks are like pressure cookers which make diamonds, and as I’m rediscovering in my “renaissance” of Hackathon attendance, they’re great for pushing yourself to new heights.

That about sums up this post. Thanks for reading, all. As promised, I’ll post again soon with some more technical content on my work in Azure AI in the integration space, but until then, take care.

How AI Cybersecurity Can Reduce Your Security Operations Costs
https://cloudguard.ai/resources/ai-reduce-security-costs/
Fri, 16 Feb 2024

AI threats are advancing by the hour, orchestrated by sophisticated individuals and groups worldwide. These threat actors utilise AI to launch targeted attacks on businesses for various motives, including financial gain and political reasons.

The growing trend of AI-driven phishing techniques and impersonation tactics has heightened the need for organisations to integrate advanced, proactive strategies into their cybersecurity posture.

Using AI is not just an option but a necessity. Without utilising advanced techniques for both offence and defence, businesses risk falling behind in addressing the complex challenges presented by changing cyber threats.

Modern Security Operations Challenges

Let’s be honest. Humans have their limits. We live in a world where security operations typically rely heavily on human interactions. Security Operations Centers (SOCs) house teams of Security Analysts tasked with monitoring and responding to cyber threats. It’s not sustainable, efficient or effective.

Relying solely on human capabilities for monitoring and responding to the sophisticated and ever-changing landscape of cyber threats presents several challenges. Today, the complexity of cyber threats far exceeds the capabilities of human analysts.

Security Analysts follow a manual and intensive process when responding to security events. Upon detection, they initiate a series of scripted actions, known as Standard Operating Procedures (SOPs), to investigate and mitigate the threat. These SOPs are essentially libraries of predefined steps to be taken in response to specific scenarios.


This manual investigative process is time-consuming, with analysts spending hours delving into the details of each event. As a result, a SOC team, even one operating 24/7, can become a factory of human-intensive tasks. The sheer volume of events, multiplied by the number of customers and the duration of threats, creates an environment prone to human errors and inefficiencies.


Common challenges include fatigue-driven errors, delays in investigation, and the risk of crucial details being overlooked. Human limitations in terms of working hours further exacerbate these challenges, leading to suboptimal operational efficiency and potential negative impacts on the quality of service delivered to customers.

The Benefits of AI and Automation

In addressing the challenges faced by SOC teams, the introduction of AI and automation significantly improves the capabilities of security operations. The once manual and time-consuming processes undertaken by Security Analysts can be automated to enhance efficiency and reduce response times.

Imagine an event triggering an output. Traditionally, a Security Analyst would follow a predefined script, executing a series of steps outlined in a Standard Operating Procedure (SOP). This workflow can be automated. The automated system replicates the analyst’s behaviour, executing the SOPs in response to the event trigger.

Automated cybersecurity holds the power to expedite the entire investigative process. What might have taken hours for a human analyst to complete can be achieved in minutes, or even seconds, with automation. The automation system can efficiently handle routine tasks and decision-making processes, significantly reducing mean time to resolve (MTTR).
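The trigger-runs-the-SOP idea is simple to picture in code. The sketch below is illustrative only: the event types, step names and SOP library are invented, but the structure (an ordered list of steps executed automatically, with escalation to a human for unknown scenarios) is the pattern the paragraph describes.

```python
# Each SOP is an ordered list of step functions; an event trigger runs the whole list.
def isolate_host(event):   return f"isolated {event['host']}"
def collect_logs(event):   return f"collected logs from {event['host']}"
def notify_analyst(event): return f"paged analyst about {event['type']}"

SOP_LIBRARY = {
    "malware_detected": [isolate_host, collect_logs, notify_analyst],
    "phishing_reported": [collect_logs, notify_analyst],
}

def handle_event(event):
    """Replicates the analyst's scripted response in seconds rather than hours."""
    steps = SOP_LIBRARY.get(event["type"], [])
    results = [step(event) for step in steps]
    if not steps:  # unknown scenario: hand over to a human with full context
        results.append("handed to analyst with full context")
    return results

print(handle_event({"type": "malware_detected", "host": "laptop-7"}))
```

Every step that completes automatically is time subtracted from MTTR; only the genuinely novel cases reach an analyst.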


While automation can handle most tasks, there may be scenarios where human intervention is necessary. In these cases, the automated system can seamlessly hand over the information and context to a human analyst. This ensures that the analyst can focus on the nuanced and complex aspects of the investigation, rather than mundane and repetitive tasks.

By leveraging AI and automation, SOC teams can maximise their value by concentrating on higher-order tasks and strategic decision-making. The result is a more streamlined and efficient workflow, leading to quicker issue resolution, better mean-time-to-resolve rates and, ultimately, improved customer satisfaction.

How AI Reduces Security Operations Costs

The role of AI in security operations allows for a critical perspective on cost reduction. Unlike traditional methods where SOPs are manually created for every new scenario, AI operates through learning on the fly. This self-learning ability ensures that as new, unprecedented events occur, the AI system adapts and evolves without the need for manual intervention.

True AI doesn’t require analysts to create specific procedures for each unique event. Instead, it learns from the behaviour observed during the event, eliminating the need for repeated training. In essence, the AI becomes self-sufficient in handling scenarios it has encountered before.

This self-training capability allows for rapid response times. When a similar event occurs in the future, the AI can autonomously and efficiently execute the learned processes, drastically reducing the time needed for investigation and resolution.

Moreover, the cost-saving benefits of AI extend beyond operational efficiency. In a business context, the introduction of AI allows for the creation of a leaner SOC team that heavily leverages automation.

By reducing the need for a full-fledged SOC team, businesses can significantly cut costs while enhancing operational effectiveness.

AI’s ability to handle routine tasks means that human analysts can focus on more complex, strategic, and value-added activities, contributing to a multifaceted improvement in both operational efficiency and overall cost-effectiveness.

Scalability and Future-Proofing

With a well-implemented approach, businesses can focus on expanding the modular architecture of their automation without being constrained by concerns related to human resources within growth plans.

The scalability achieved through AI is not just about adding more people to drive the expansion but revolves around investing in the scalability of the automation framework.

In contrast to a flat architecture that might hinder scalability, the emphasis is placed on strategic planning to create a system that can effortlessly scale out. The importance of scalability is a key consideration when adopting an AI-based strategy for cybersecurity posture.

However, there is also an ethical dimension to scalability that must be considered. Rather than advocating for indiscriminate role displacement, businesses must consider a more nuanced approach.

Rather than cutting roles, businesses should repurpose their teams, creating a learning environment that contributes to the AI strategy.

This approach is not only more ethical but also more rewarding, creating a collaborative partnership between human expertise and AI capabilities. In essence, the focus is on achieving scalability while future-proofing the workforce through strategic repurposing and upskilling.

Conclusion

Striking the right balance between AI, automation, and human expertise in cybersecurity operations is essential. AI is a powerful tool that businesses can leverage to help reduce operational costs and allow security teams to demonstrate their value through higher order tasks and strategic decision making.

Gone are the days when Security Analysts spend hours manually investigating a single event. However, the unfortunate trend of tech brands using AI buzzwords for marketing can cause confusion among decision-makers, as it creates misconceptions about the ease of deploying comprehensive AI solutions.

CloudGuard’s approach to combatting threats combines AI for intricate threat analysis, automation for handling mundane tasks, and human involvement for contextualising and refining the outcomes – offering businesses a comprehensive cybersecurity solution.

Will AI Regulation Harm Cybersecurity and Help Hackers?
https://cloudguard.ai/resources/ai-regulation-harm-cybersecurity/
Fri, 15 Dec 2023

Adopting and leveraging the advantages of AI is accelerating rapidly. Questions surrounding the potential impact of new AI regulations on cybersecurity innovation have sparked discussions within the industry. I was asked about this on our recent webinar about the 2024 threat landscape. In this piece, I aim to provide further analysis of this nuanced landscape, drawing upon my experiences as a seasoned professional in the cybersecurity sector.

The dual ecosystems

Let’s delve into the dichotomy I highlighted in my recent comments—the existence of two distinct ecosystems within cybersecurity. On one side, we confront well-funded entities like Octo Tempest and Scattered Spider, along with a handful of nation-state-supported groups receiving over $400 million in funding on top of ransom revenue. The ability of these entities to independently propel the creation of AI tools, including quantum computing, raises concerns about how they will challenge IT security, and about how global regulations need to place guardrails on their trajectory.

The second ecosystem, in contrast, comprises a diverse array of cybersecurity entities, ranging from innovative startups to established companies dedicated to defending against evolving threats. This ecosystem is characterised by agility, adaptability, innovation, and a commitment to pushing the boundaries of critical thinking.

an image of futuristic white humanoid robot facing off against a futuristic black humanoid robot

Unlike the first ecosystem, which is marked by substantial funding from nation-states, this second ecosystem thrives on a combination of ingenuity, collaboration, and a shared commitment to cybersecurity excellence. The effectiveness of nefarious activities and targeting of specific cohorts and industries is escalating considerably. As I discussed, this is unlikely to decline in 2024.

Yet there is a sensitivity to ensuring appropriate but adaptive global collaboration on guardrails and guidelines. The challenge lies in ensuring that AI regulations do not inadvertently stifle innovation or constrain development pathways. We can learn some powerful lessons from the hacker mindset (“good enough” delivery, critical process thinking, and silo exploitation) to respond more effectively to emerging cyber threats.

The regulatory conundrum

We have observed the global challenges around freedom of speech and data privacy, and how these can reduce our ability to understand and more effectively protect others. The initial question that arises is: to what extent do AI regulations need to influence the capabilities of these well-funded actors?

Given the geopolitical challenges, majority global alignment is unlikely. Without it, how effective will any resulting regulation and protection be? As AI and quantum computing accelerate digital transformations, we need to be ready and prepared for significant evolution at a greater globally collaborative level. This must support positive innovation and development, yet offer greater levels of nation-backed protection for individuals and businesses.

The crucial role of international collaboration

an image of two people shaking hands in front of various country flags

Moving forward, I underscore the importance of enhanced international collaboration in tackling cybersecurity challenges, spanning both commercial and political entities. The interconnected nature of cyber threats demands a consolidated response that transcends national borders. Guardrails at the country level will prove inadequate in the face of cybercriminals who operate seamlessly across borders. Strengthened collaboration needs to be backed by consistent political change, supported by globally informed policies.

The balancing act of policies in AI regulation

My concern is centred around the potential consequences of more stringent policy implementation that simply is not agile enough. A draconian approach, I argue, will hinder the very innovation and acceleration necessary for effective cybersecurity, AI and quantum computing – and the benefits this will bring.

Striking the right balance is key — regulations must adapt to the dynamic threat landscape without stifling the agility required to combat emerging challenges. Quantum computing technology may need licensing, in the same way patents and drug developments are regulated, to control usage and purpose. AI developments will need increased privacy assurance testing and security validation. There also needs to be greater transparency from tech companies, as the personal data they collect and analyse, whether directly or indirectly, could be acquired or exfiltrated without consent.

Empowering cyber communities

In my opinion, the power lies within cyber communities, governments, and expert cyber agencies. These collective communities, driven by a shared purpose to make the world a better place through collaboration and innovation, can harness the accelerating technologies of AI and large language models to strengthen cybersecurity.

the letters AI coming out of laptop screen

The delicate balance involves enabling these communities to leverage AI while ensuring responsible and ethical use. Ensuring the messaging is understood, continually evolves, and is adopted is a crucial part of future success.

Navigating the tightrope

There has always been a delicate balance between regulatory measures and fostering an environment conducive to innovation in the cybersecurity sector.

AI regulation has taken too long to agree and implement, let alone evolve. This balance is critical in ensuring the ongoing effectiveness of our cybersecurity efforts. Restricting innovation runs the risk of leaving us ill-prepared to face the ever-evolving tactics employed by cybercriminals.

Worse still, it puts businesses and individuals who cannot afford or do not understand how to improve their protection at greater levels of risk.

Innovation as a necessity

Let’s be clear—cybersecurity innovation is not a luxury but an imperative in today’s digital landscape. Our AI-powered cybersecurity solutions, such as the MXDR (Managed Extended Detection and Response), demonstrate the innovation necessary to stay ahead of sophisticated threats. As technology progresses ever faster, defence mechanisms must evolve in tandem. AI regulation cannot hold that back.

introducing cloudguard's new mxdr platform

Global unity in cybersecurity

The call for international collaboration is more than a mere suggestion or wishful thinking; it is a fundamental recognition that cybersecurity is a collective responsibility.

There must be concerted, multi-national alignment to reduce weaponisation tactics and activities in the cyber threat landscape. No one entity or nation can stand alone against the rising tide of cyber threats. Collaboration involves not only sharing intelligence but also aligning policies globally to create a unified defence against adversaries. Learn fast, then adapt policies and regulation from that continual learning. We need to make it part of the fabric of our lives to ensure those who are most vulnerable have access to protection.

The future cyber landscape

Looking ahead, I hope for a future where cybersecurity evolves alongside the advancements in AI and automation. However, this future hinges on the careful calibration of regulatory measures and collaboration. While oversight is necessary, a balanced approach—one that empowers the cybersecurity industry to proactively address and adapt to emerging challenges—is crucial.

My final thoughts on AI regulation

My insights provide a firsthand view of the intricate relationship between AI regulation and cybersecurity innovation. The delicate balance between regulatory measures and fostering innovation is vital to the continued effectiveness of our cybersecurity efforts.

As we navigate the evolving landscape of cyber threats, a consistent, collaborative, and more globally-informed learning approach, coupled with a commitment to innovation, will be crucial in safeguarding our digital futures.

Striking this balance requires a nuanced understanding of the challenges and opportunities that lie at the intersection of AI and cybersecurity—an intersection that defines the future of our industry.

To learn more about AI regulation, and the 2024 threat landscape as a whole, watch my webinar on demand. Here’s a snippet to get you started.

]]>
What is Microsoft Copilot? 6 Things Business Leaders Must Know https://cloudguard.ai/resources/what-is-microsoft-copilot/?utm_source=rss&utm_medium=rss&utm_campaign=what-is-microsoft-copilot Mon, 14 Aug 2023 11:11:51 +0000 https://cloudguard.ai/?p=2948 Microsoft has introduced a game-changing innovation that promises to redefine the way businesses operate. Microsoft Copilot, an AI-powered tool integrated into the Microsoft 365 suite, has the potential to revolutionise productivity, streamline tasks, and enhance collaboration. As IT and business leaders, it’s essential to grasp both the advantages and potential drawbacks of this new tech. Here, I’ll delve into the intricacies of Microsoft Copilot, offering a balanced overview to help you make informed decisions.

Article quick links

What is Microsoft Copilot?

a prompt being entered into Microsoft copilot in Word
Source: Microsoft

Imagine an AI assistant that can generate documents, analyse data, summarise meetings, and even draft emails—all at your command. That’s the essence of Microsoft Copilot. Launched in March 2023, Copilot is designed to assist users across various Microsoft applications, such as Word, Excel, PowerPoint, Outlook, and Teams. By harnessing the power of AI and natural language processing, Copilot aims to enhance efficiency, creativity, and collaboration within the workplace.

Six Critical Things to Know About Microsoft Copilot

Now that you have an understanding of what Microsoft Copilot is, here are six things you must know before it becomes part of your business.

1. Limited availability and rollout strategy

Microsoft Copilot’s introduction has been carefully managed. Therefore, access is initially granted to select large enterprise clients. The goal of this phased rollout is to gather valuable user feedback, address potential issues, and refine the technology before broader availability. As of now, an “invited” list of around 600 customers has access, and a general release is anticipated in the near future, likely by early 2024. This cautious approach reflects Microsoft’s commitment to delivering a polished and effective tool that aligns with user needs and expectations.

2. The evolution from ChatGPT to Microsoft Copilot

Microsoft Copilot is built upon the foundation of ChatGPT, the AI language model developed by OpenAI. However, Copilot goes beyond mere text generation and understanding. It’s a multifaceted AI tool that leverages deep learning and natural language processing to assist users in various tasks. From generating code snippets to composing emails, creating presentations, and analysing data, Copilot’s capabilities are a significant advancement over its predecessors. Its integration into Microsoft 365 applications makes it a versatile and indispensable tool for enhancing productivity and creativity.

3. Seamless integration into Microsoft 365 Suite

One of the most compelling aspects of Microsoft Copilot is its seamless integration into the Microsoft 365 suite. Users will find Copilot’s functionalities embedded within the interfaces of applications like Word, Excel, PowerPoint, Outlook, and Teams. This integration ensures that Copilot’s assistance is available across different tasks and contexts. Whether you’re crafting a document, analysing data, or collaborating with team members, Copilot aims to provide relevant and context-aware suggestions, enhancing the overall user experience.

4. Empowering user productivity across applications

Microsoft Copilot’s potential to enhance user productivity is substantial. For instance, within Word, Copilot can leverage information from OneNote to generate comprehensive proposals tailored to specific needs. It can even suggest visual elements that align with past documents, streamlining the creation of visually appealing content. In Excel, Copilot shines in simplifying complex data analysis tasks. It can help identify trends, generate graphs, and perform intricate calculations, enabling users to extract insights from data more efficiently. By automating repetitive and time-consuming tasks, Copilot allows users to allocate their time and skills to more strategic activities.

5. Balancing suggestions with human judgment

Microsoft copilot generating a swot analysis
Source: Microsoft

While Microsoft Copilot’s suggestions are valuable, they’re not perfect. Users must use their judgment to determine the relevance and accuracy of Copilot’s recommendations. This is especially crucial for tasks that involve subjective decisions, creative content, or nuanced context. Copilot’s assistance serves as a valuable resource that can accelerate processes and spark creativity. However, the final responsibility for content quality and accuracy ultimately rests with the user. Striking the right balance between leveraging AI capabilities and applying human expertise will be essential for maximising the benefits of Copilot.

6. Data quality, privacy, and security considerations

Microsoft Copilot’s effectiveness heavily relies on the quality of the data it interacts with. The accuracy of its suggestions and insights hinges on the accuracy, completeness, and relevance of the underlying data. Organisations need to prioritise data hygiene, ensuring that the data used by Copilot is accurate, up-to-date, and representative of the tasks at hand.

Furthermore, the access Copilot has to sensitive internal data raises privacy and security concerns. Organisations must apply robust security measures to protect proprietary information and ensure compliance with data protection regulations. Establishing clear guidelines on data usage, storage, and access rights will be essential to build trust and mitigate potential risks associated with data handling.

Pricing and future outlook

As of now, Microsoft has announced a premium of $30 per user per month for access to Microsoft 365 Copilot. This pricing strategy reflects the substantial investment Microsoft has made in developing this AI technology. While the cost may seem significant, the potential gains in productivity and efficiency could justify the expense for forward-thinking organisations.
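At that price point, the arithmetic is easy to run for your own headcount. A quick sketch of the annual spend, where the seat counts are purely illustrative:

```python
# Back-of-the-envelope Microsoft 365 Copilot licensing cost at the announced
# $30 per user per month. The seat counts below are purely illustrative.
PRICE_PER_USER_PER_MONTH = 30  # USD

def annual_cost(seats: int) -> int:
    """Annual Copilot licensing spend in USD for a given number of seats."""
    return seats * PRICE_PER_USER_PER_MONTH * 12

for seats in (50, 250, 1000):
    print(f"{seats:>5} seats -> ${annual_cost(seats):,}/year")
```

Even a 250-seat deployment lands at $90,000 per year, which is why weighing the expense against measurable productivity gains matters.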

Looking ahead, the integration of AI tools like Copilot into everyday workflows is an indicator of the evolving nature of business operations. As AI technology continues to advance, Copilot is likely just the beginning of a new era. A time where AI-driven assistance becomes an indispensable part of our work lives.

Understanding Microsoft Copilot: final thoughts

Microsoft Copilot represents a significant leap forward in AI-driven productivity tools. As IT and business leaders, it’s essential to recognise both the potential benefits and challenges that come with its adoption. While Copilot has the capacity to streamline tasks, enhance collaboration, and boost efficiency, its successful implementation requires careful consideration of data quality, security, and employee training. As the technological landscape continues to evolve, embracing innovations like Copilot may be the key to staying competitive and agile in the modern business world.

]]>
AI Threat Intelligence: No longer something of the future https://cloudguard.ai/resources/ai-no-longer-something-of-the-future/?utm_source=rss&utm_medium=rss&utm_campaign=ai-no-longer-something-of-the-future Mon, 23 Jan 2023 11:56:09 +0000 https://cloudguard.ai/?p=1259 AI threat intelligence is here. It can no longer be denied. Find out what this means for the future of cybersecurity defences.

Machine Learning As Our First Line Of Digital Defense 

Machine learning is a type of artificial intelligence that allows computers to evaluate data and learn its meaning. The goal of combining machine learning and threat intelligence is to enable defenders to find vulnerabilities faster than humans alone can, and to stop threats before they cause more damage. Furthermore, conventional detection technologies invariably generate too many false-positive results due to the sheer volume of security threats.

Machine learning can reduce the number of false positives by analyzing threat intelligence and condensing it into a smaller subset of features to watch for.
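As a toy sketch of this idea (the indicators and thresholds below are invented for illustration, not taken from any real product), condensing many raw threat-intel sightings into a small watchlist lets triage ignore one-off noise:

```python
# Toy sketch, not a production model: condense a noisy threat-intel feed into a
# small, high-confidence watchlist, then use it to cut false-positive alerts.
from collections import Counter

def condense_intel(sightings, min_reports=3):
    """Keep only indicators reported independently at least min_reports times."""
    counts = Counter(sightings)
    return {indicator for indicator, n in counts.items() if n >= min_reports}

def triage(alerts, watchlist):
    """Escalate only alerts whose indicator appears on the condensed watchlist."""
    return [alert for alert in alerts if alert["indicator"] in watchlist]

# Illustrative feed: "bad.example" is widely reported, the rest is noise.
sightings = ["bad.example"] * 3 + ["noise.example", "10.0.0.9", "10.0.0.9"]
watchlist = condense_intel(sightings)

alerts = [{"id": 1, "indicator": "bad.example"},
          {"id": 2, "indicator": "noise.example"}]
print(triage(alerts, watchlist))  # only the widely-reported indicator escalates
```

Real systems use far richer features than a report count, but the shape is the same: many raw signals in, a small high-signal subset out.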

According to a global advanced threat intelligence consultant, artificial intelligence is becoming more important in deterring, detecting, and resolving cyber-threats as attacks evolve and adversaries operate in well-organized, highly skilled organizations.

The Security Threat Of Today Has Become An Industry Of Its Own

Many of today’s adversaries operate in large networks, relying on a “crime-as-a-service” business model that involves hundreds of people disseminating threats for a commission. Threat actors are using automation as a weapon to extend their reach. As a result, having A.I.-enabled structures in place to sift through massive amounts of security threats and react promptly becomes even more critical.

Machine learning-based AI threat intelligence products work by taking inputs, evaluating them, and generating results. Machine learning’s inputs for detection systems include threat intelligence, and its outputs are either alerts implying attacks or computerized actions that stop attacks. If the threat intelligence contains errors, it will provide “bad” details to the attack tracking tools, resulting in “bad” outputs from the tools’ machine learning techniques.

The Magic Of AI Threat Intelligence

There’s too much data and not enough time. Because of this, as well as the high cost of labor, machines have been at the frontline of cyber defense for nearly 50 years. It’s also why cybersecurity providers and consumers continuously leverage major innovations in software design, machine learning, and artificial intelligence (AI).

In contrast to the human brain, none of these AI cyber technologies is completely autonomous or truly “intelligent.” Instead, they use complex algorithms and massive amounts of computing power to ‘intelligently’ process data. But that hasn’t stopped AI from becoming more prevalent in cybersecurity.

Cybersecurity: AI vs. Human Beings

AI and machine learning play a key role on both sides of the cybersecurity battle, allowing attackers and defenders to operate at unprecedented speeds and scales.

On the attack side, the rise of so-called “adversarial AI” has seen relatively simple machine learning algorithms used to disastrous effect in spear-phishing attacks. By extracting open-source intelligence and studying communications from a compromised account in an automated, ‘intelligent’ manner, a human attacker can deploy effective social engineering tactics with a high probability of success and almost no effort.

DeepFake attacks, which use AI to emulate individuals’ voices and visual appearance in audio and video files, are another example. IBM’s DeepLocker pilot project is one of many demonstrating how artificial intelligence will speed up the development of advanced malicious software.

Threat Intelligence with AI

Artificial intelligence and machine learning are essential for effective threat intelligence in two key respects: coping with massive amounts of data, and ensuring that the data is current.

Volumes are massive, and they’re only getting bigger. Without a sophisticated software suite, processing data for real-time decision-making is impossible. Sensors, sinkholes, and monitored phishing sites can greatly expand threat data collection, and algorithms can classify and sift through it all at machine speed to identify unusual behavior.

Adding To Human Intelligence And Experience

We know that cyber skills are in high demand worldwide, with up to 3.5 million job openings unfilled right now. This adds to the difficulty of implementing an AI-driven cyber strategy that requires little human intervention.

In a well-run security operation, human analysts are more than just supervisors of automation. Their value lies in the knowledge of experienced professionals who can break the mould, think creatively, and add context to the ‘almost-finished’ product delivered by AI and machine learning processes.

Another of AI’s achievements in cyber defense is simulating realistic attack scenarios, which requires human/machine collaboration. Because of their capacity to help prevent and detect new attacks, these technologies are becoming increasingly important in the ethical hacking toolkit.

Conclusion

While AI is becoming more prevalent in both cyber-attack and defense, neither side achieves its goals by depending on it entirely. Just as threat actors benefit most when they combine human intelligence with the advanced logic and industry of machines, security teams have found that the same formula works best for defense.

Nothing, at least not yet, compares to the unique human ability to think. Only people can add the final 10% – the missing link in the chain that ensures the whole makes perfect sense – and make the kinds of critical decisions that corporate leaders would rather not delegate to a computer. Humans and machines form the best possible team when they work together.

]]>
Mastering Azure Sentinel: A Comprehensive Guide https://cloudguard.ai/resources/what-is-azure-sentinel/?utm_source=rss&utm_medium=rss&utm_campaign=what-is-azure-sentinel Thu, 16 Sep 2021 10:03:58 +0000 https://cloudguard.ai/?p=1199 Azure Sentinel is a SIEM (Security Information and Event Management) solution, designed to empower organisations with advanced threat detection and proactive security measures.

Infused with cutting-edge Machine Learning (ML) capabilities, Azure Sentinel stands out by offering robust, built-in analytics for the most common threats.

This article will guide you through understanding Azure Sentinel, its key features, and how it can transform your security operations.

What is Azure Sentinel?

Understanding the Basics

Azure Sentinel, one of the most sophisticated SIEM solutions available, uses advanced ML to provide deep analytics for threat detection and response.

microsoft sentinel overview page
Screenshot of Overview page in Microsoft Sentinel

Note: It was announced at Microsoft Ignite 2021 that Azure Sentinel was being renamed to Microsoft Sentinel. Read this release by Microsoft’s Sonia Cuff.

Its capabilities extend to data experts within organisations, enabling the creation of custom machine learning models to address unique customer threats.

By using Azure Sentinel, you gain a nuanced understanding of threat behaviors, allowing you to focus on solving problems and enhancing customer security rather than merely identifying issues.

Key Features of Azure Sentinel

Azure Sentinel connects seamlessly with a variety of data sources across your enterprise. These sources include users, devices, datasets, applications, and information from multiple tenants and clouds. This is done via data connectors.

There are out-of-the-box connectors, which are pre-built by Azure and easily connect to common data sources like Office 365 and Azure Active Directory. Custom connectors allow you to connect to other data sources not covered by the pre-built options, letting you tailor the data collection to your specific needs. This ensures that all relevant data can be analysed by Azure Sentinel.
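To make the custom-connector idea concrete, here is a hedged sketch of the request signing used by the legacy Log Analytics HTTP Data Collector API, one common way custom sources have pushed JSON into a workspace. The workspace ID and shared key below are dummy placeholders, and newer environments should prefer Microsoft's Logs Ingestion API; the sketch only illustrates the general shape:

```python
# Hedged sketch of a custom connector: building the SharedKey Authorization
# header used by the (legacy) Log Analytics HTTP Data Collector API, which
# POSTs custom JSON records to /api/logs. Credentials here are placeholders.
import base64
import hashlib
import hmac

def build_signature(workspace_id: str, shared_key_b64: str,
                    date_rfc1123: str, content_length: int) -> str:
    """HMAC-SHA256 over the canonical string, keyed with the decoded shared key."""
    string_to_sign = (
        f"POST\n{content_length}\napplication/json\n"
        f"x-ms-date:{date_rfc1123}\n/api/logs"
    )
    key = base64.b64decode(shared_key_b64)
    digest = hmac.new(key, string_to_sign.encode("utf-8"), hashlib.sha256).digest()
    return f"SharedKey {workspace_id}:{base64.b64encode(digest).decode()}"

auth = build_signature(
    workspace_id="00000000-0000-0000-0000-000000000000",   # placeholder
    shared_key_b64=base64.b64encode(b"dummy-key").decode(),  # placeholder
    date_rfc1123="Mon, 01 Jan 2024 00:00:00 GMT",
    content_length=42,
)
print(auth)
```

The resulting header accompanies the POST along with an `x-ms-date` header and a custom log-type name, after which the records appear in the workspace for Sentinel to analyse.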

azure sentinel content hub

As a cloud-native solution, Azure Sentinel alleviates the burden on your security operations team by eliminating the need for infrastructure monitoring and maintenance.

Additionally, its cost-effectiveness sets it apart from other SIEM tools; you only pay for the data analysed, with billing managed through the Azure Monitor Log Analytics workspace.

Azure Sentinel and AI: Enhancing Threat Detection

Leveraging AI for Real-Time Threat Assessment

Security analysts face immense pressure when sifting through countless alerts.

Azure Sentinel addresses this challenge by using scalable machine learning techniques to correlate millions of low-fidelity anomalies, presenting only the most critical high-fidelity threats.

Security incidents in Azure sentinel
Screenshot of security incidents in Azure Sentinel prioritised by severity

This approach allows you to extract valuable insights from extensive security data, quickly identifying threats such as a breached account used for ransomware deployment.

Investigating and Hunting Suspicious Activities

Azure Sentinel offers a graphical, AI-based investigation process that significantly reduces the time needed to understand the scope and impact of an attack.

threat investigation in Azure Sentinel
Screenshot of threat investigation in Azure Sentinel

This unified dashboard enables you to visualise the attack and take appropriate actions swiftly. Proactive threat hunting is another crucial aspect, facilitated by Azure Sentinel’s hunting queries and Azure Notebooks.

These tools help you automate and optimise your security assessments, making your SecOps team more efficient.

Automating Threat Response

Automation is key to managing recurring threats efficiently.

Azure Sentinel includes built-in automation and orchestration features, allowing you to create predefined or custom playbooks to respond to threats promptly.

architecture of automated response in azure sentinel
Architecture of automated response in Azure Sentinel

Automated response works by using pre-defined rules and playbooks to automatically take actions when specific security threats are detected.

For example, if an unusual login is detected, Azure Sentinel can automatically trigger a playbook that blocks the user’s account, sends an alert to the security team, and logs the event for further analysis. This helps in quickly addressing threats without manual intervention, saving time and improving security efficiency.
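The shape of that flow can be sketched in a few lines. This is an illustration of the pattern only, not Sentinel's API; the event fields and action names are invented. A detection type maps to an ordered list of actions, and every action runs when the detection fires:

```python
# Illustrative sketch (not the Sentinel API): automated response as a mapping
# from detection type to a playbook, i.e. an ordered list of actions.
def block_account(event):
    return f"blocked {event['user']}"

def alert_soc(event):
    return f"alerted SOC: {event['type']} for {event['user']}"

def log_for_analysis(event):
    return f"logged {event['type']} for later analysis"

PLAYBOOKS = {
    "unusual_login": [block_account, alert_soc, log_for_analysis],
}

def respond(event):
    """Run every action in the playbook registered for this event type."""
    return [action(event) for action in PLAYBOOKS.get(event["type"], [])]

print(respond({"type": "unusual_login", "user": "alice"}))
```

In Sentinel the equivalent wiring is done declaratively with automation rules and Logic Apps playbooks rather than in code, but the detection-to-actions mapping is the same idea.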

By automating mundane tasks, you can focus on more complex security challenges, ensuring a robust defense against persistent threats.

Deep Dive into Azure Sentinel’s Fusion Technology

What is Fusion Technology?

Azure Sentinel’s Fusion technology combines low- and medium-severity alerts from both Microsoft and third-party security products into high-severity incidents using machine learning.

This results in low-volume, high-fidelity, and high-severity incidents, designed to provide a clearer picture of your security landscape.

How Fusion Enhances Security Operations

Fusion technology enables Azure Sentinel to track multi-stage threats by identifying patterns of abnormal behavior and malicious transactions across different phases of an attack.

Fusion rule types in microsoft sentinel
Screenshot of multistage attack detection in Azure Sentinel

This detection method triggers incidents based on these patterns, making it easier to spot and respond to sophisticated threats.

By reducing false-positive rates, Fusion technology ensures that your security team can focus on genuine threats, improving overall security posture.
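A toy version of this correlation idea (not Microsoft's actual model; the entities, window, and thresholds are invented for illustration) might group alerts that share an entity within a time window and promote the group to a single high-severity incident:

```python
# Toy illustration of the Fusion idea: several low/medium-severity alerts
# touching the same entity inside a time window are fused into one
# high-severity incident; isolated alerts are left alone.
from collections import defaultdict

def fuse(alerts, window_minutes=60, min_alerts=2):
    by_entity = defaultdict(list)
    for alert in alerts:
        by_entity[alert["entity"]].append(alert)

    incidents = []
    for entity, group in sorted(by_entity.items()):
        group.sort(key=lambda a: a["minute"])
        if (len(group) >= min_alerts
                and group[-1]["minute"] - group[0]["minute"] <= window_minutes):
            incidents.append({"entity": entity, "severity": "High",
                              "alerts": [a["id"] for a in group]})
    return incidents

alerts = [
    {"id": "a1", "entity": "alice", "minute": 0,  "severity": "Low"},
    {"id": "a2", "entity": "alice", "minute": 15, "severity": "Medium"},
    {"id": "a3", "entity": "bob",   "minute": 30, "severity": "Low"},
]
print(fuse(alerts))  # one fused incident for "alice"; "bob" stays uncorrelated
```

The real Fusion engine correlates far richer signals with machine learning across kill-chain stages, but the output contract is the same: few incidents, each high-fidelity.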

Practical Implementation: Using Azure Sentinel in Your Organisation

Setting Up Azure Sentinel

To get started with Azure Sentinel, you need to create an Azure account and set up a Log Analytics workspace.

searching for Microsoft Sentinel in Azure portal
Screenshot of searching for Microsoft Sentinel in Azure portal

selecting your workspace in micrsoft sentinel
Screenshot of choosing your workspace to deploy Azure Sentinel

Once your workspace is ready, you can connect various data sources, including Azure services, on-premises systems, and third-party solutions. This is done via the Content Hub.

Azure Sentinel provides several connectors to facilitate this integration, ensuring comprehensive data coverage.

Customising Machine Learning Models

One of Azure Sentinel’s standout features is its ability to customise machine learning models to fit your specific needs.

Building custom analytics rule with ML results
Building custom analytics rule with ML results in Sentinel

By leveraging the built-in ML capabilities, you can create models tailored to detect threats unique to your environment.

This customisation ensures that Azure Sentinel adapts to your security requirements, providing a personalised and effective defense mechanism.

Automating Response with Playbooks

Automation is crucial for efficient security operations.

Azure Sentinel allows you to create and implement playbooks that automate responses to specific threats. These playbooks can be predefined or custom-made, depending on your organisational needs.

creating a playbook in azure sentinel
Screenshot of creating a playbook in Azure Sentinel

Creating a playbook in Azure Sentinel is straightforward:

  1. Access Playbooks: In the Azure Sentinel portal, navigate to the “Playbooks” section under the “Configuration” area.
  2. Create New Playbook: Click “Add” to create a new playbook. This opens the Logic Apps Designer.
  3. Design Workflow: Use the Logic Apps Designer to drag and drop actions and triggers. You can automate responses such as sending alerts, blocking users, or gathering additional data.
  4. Save and Test: Once your workflow is complete, save the playbook and test it to ensure it works as expected.

Playbooks help automate responses to security threats, enhancing efficiency and consistency in your security operations. For more details, visit the Azure Sentinel Playbooks documentation.

By automating routine tasks, you can ensure a swift and consistent response to incidents, minimizing the impact of security breaches.

Conclusion

Azure Sentinel is a powerful, cloud-native solution for detecting, investigating, and responding to security threats.

Its advanced machine learning capabilities and seamless integration with various data sources make it a comprehensive tool for modern security operations.

By implementing Azure Sentinel, you can improve your security posture, reduce the burden on your security team, and focus on proactive threat management.

Embrace Azure Sentinel to safeguard your organisation and stay ahead of emerging threats.

]]>