#FactCheck: Viral AI video claims Iran has destroyed Israel in an airstrike
Executive Summary:
A video circulating on social media claims to show the aftermath of Iran's missile strikes on Israel. It depicts destruction, damaged infrastructure, and panic among civilians. After digital verification, visual inspection, and a frame-by-frame review, we have determined that the video is fake: it consists of AI-generated clips and is not related to any real incident.

Claim:
The viral video claims that a recent military strike by Iran, following its initial missile attack, destroyed parts of Israel. The footage is presented as current and depicts significant destruction of buildings and widespread chaos in the streets.

FACT CHECK:
We examined the viral video to determine whether it was AI-generated. We broke the video into individual still frames and, on close examination, found that several frames contained oddly shaped visual features, abnormal body proportions, and flickering movements that do not occur in real footage. We then ran several still frames through reverse image search tools to see whether they had appeared before. The results revealed that several clips in the video had circulated previously, in separate and unrelated contexts, indicating that they are neither recent nor original.
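To illustrate the first step of this process, the sketch below shows one way a clip can be split into still frames for close inspection and reverse image searches. It is a minimal example assuming Python with the OpenCV library; the file name and sampling rate are purely illustrative, not the exact tooling used in this check.

```python
# Minimal sketch: split a video into still frames for manual inspection and
# reverse image searches. Assumes Python with OpenCV installed
# (pip install opencv-python); the file name "viral_clip.mp4" is illustrative.
import cv2

video = cv2.VideoCapture("viral_clip.mp4")
frame_index = 0

while True:
    ok, frame = video.read()          # ok becomes False at the end of the clip
    if not ok:
        break
    if frame_index % 30 == 0:         # keep roughly one frame per second for a ~30 fps clip
        cv2.imwrite(f"frame_{frame_index:05d}.png", frame)
    frame_index += 1

video.release()
print(f"Read {frame_index} frames")
```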

While examining the Instagram profile, we noticed that the account frequently shares visually dramatic content that appears digitally created. Many earlier posts from the same page include unrealistic scenes, such as wrecked aircraft in desolate areas or buildings collapsing in unnatural ways. In the current video, for instance, the fighter jets shown have multiple wings, which is not technically or aerodynamically possible. The profile’s bio, which reads "Resistance of Artificial Intelligence," suggests that the page intentionally focuses on sharing AI-generated or fictional content.

We also ran the viral post through Tenorshare.AI for deepfake detection, and the result indicated a 94% likelihood that the content is AI-generated. Taken together, these findings establish that the video is synthetic and unrelated to any event in Israel, debunking a false narrative propagated on social media.

Conclusion:
Our research found that the video is fake, contains AI-generated imagery, and is not related to any real missile strike or destruction in Israel. The source appears intended to fuel panic and misinformation amid already heightened geopolitical tension. We urge viewers not to share this unverified content and to rely on trusted sources. During sensitive international developments, the spread of fake imagery can sow fear, confusion, and misinformation on a global scale.
- Claim: Real Footage of Iran’s Missile Strikes on Israel
- Claimed On: Social Media
- Fact Check: False and Misleading

Introduction:
This report examines an ongoing phishing scam targeting customers of the State Bank of India (SBI), India's largest public sector bank, that uses fake Self-KYC APKs to trick people. The circulating image is part of a phishing ploy to get users to download bogus APK files by claiming they need to update or confirm their "Know Your Customer (KYC)" information.
Fake Claim:
A picture making the rounds on social media comes with an APK file. It shows a phishing message claiming that the user's SBI YONO account will stop working because of their "Old PAN card," and tells the user to install an APK (Android Application Package) file labelled "WBI APK" to verify documents and keep the account open. This message is fake and aims to get people to download a harmful app.
Key Characteristics of the Scam:
- Urgency Tactics: The messages "URGENTLY REQUIRED" and "Your account will be blocked today" show how scammers try to scare people into acting fast without thinking.
- PAN Card Reference: Crooks often use PAN card verification and KYC updates as a trick because these are normal for Indian bank customers.
- Risky APK Downloads: The message pushes people to get APK files, which can be dangerous. APKs from places other than the Google Play Store often have harmful software.
- Copying the Brand: The message looks a lot like SBI's real words and logos to seem legit.
- Shady Source: You can't find the APK they mention on Google Play or SBI's website, which means you should ignore the app right away.
Modus Operandi:
- Delivery Mechanism: Typically, users of messaging services like "WhatsApp," "SMS," or "email" receive identical messages with an APK link, which is how the scam is distributed.
- APK Installation: The phony APK frequently asks for a lot of rights once it is installed, including access to "SMS," "contacts," "calls," and "banking apps."
- Data Theft: Once installed, the program may have the ability to steal card numbers, personal information, OTPs, and banking credentials.
- Remote Access: These APKs may occasionally allow cybercriminals to remotely take control of the victim's device in order to carry out fraudulent financial activities.
When the user installs the application on their device, the following interface opens:




It asks the user to allow the following:
- SMS access, which the app claims is needed to send and receive information from the bank.
- Entry of user details such as username, password, mobile number, and captcha.
Technical Findings of the Application:
Static Analysis:
- File Name: SBI SELF KYC_015850.apk
- Package Name: com.mark.dot.comsbione.krishn
- Scan Date: Sept. 25, 2024, 6:45 a.m.
- App Security Score: 52/100 (MEDIUM RISK)
- Grade: B
File Information:
- File Name: SBI SELF KYC_015850.apk
- Size: 2.88MB
- MD5: 55fdb5ff999656ddbfa0284d0707d9ef
- SHA1: 8821ee6475576beb86d271bc15882247f1e83630
- SHA256: 54bab6a7a0b111763c726e161aa8a6eb43d10b76bb1c19728ace50e5afa40448
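These hashes can be reproduced independently before sharing or submitting a sample. The short sketch below, which assumes Python's standard hashlib module and an illustrative local file path, shows how to compute the same digests.

```python
# Minimal sketch: compute the MD5, SHA1 and SHA256 digests of a file so they
# can be compared with the values reported above. Standard library only;
# the file path is illustrative.
import hashlib

def file_digests(path, chunk_size=8192):
    md5, sha1, sha256 = hashlib.md5(), hashlib.sha1(), hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):   # stream the file so large APKs fit in memory
            for h in (md5, sha1, sha256):
                h.update(chunk)
    return md5.hexdigest(), sha1.hexdigest(), sha256.hexdigest()

print(file_digests("SBI SELF KYC_015850.apk"))
```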
App Information:
- App Name: SBl Bank
- Package Name: com.mark.dot.comsbione.krishn
- Main Activity: com.mark.dot.comsbione.krishn.MainActivity
- Target SDK: 34
- Min SDK: 24
- Max SDK:
- Android Version Name: 1.0
- Android Version Code: 1
App Components:
- Activities: 8
- Services: 2
- Receivers: 2
- Providers: 1
- Exported Activities: 0
- Exported Services: 1
- Exported Receivers: 2
- Exported Providers: 0
Certificate Information:
- Binary is signed
- v1 signature: False
- v2 signature: True
- v3 signature: False
- v4 signature: False
- X.509 Subject: CN=PANDEY, OU=PANDEY, O=PANDEY, L=NK, ST=NK, C=91
- Signature Algorithm: rsassa_pkcs1v15
- Valid From: 20240904 07:38:35+00:00
- Valid To: 20490829 07:38:35+00:00
- Issuer: CN=PANDEY, OU=PANDEY, O=PANDEY, L=NK, ST=NK, C=91
- Serial Number: 0x1
- Hash Algorithm: sha256
- md5: 4536ca31b69fb68a34c6440072fca8b5
- sha1: 6f8825341186f39cfb864ba0044c034efb7cb8f4
- sha256: 6bc865a3f1371978e512fa4545850826bc29fa1d79cdedf69723b1e44bf3e23f
- sha512: 05254668e1c12a2455c3224ef49a585b599d00796fab91b6f94d0b85ab48ae4b14868dabf16aa609c3b6a4b7ac14c7c8f753111b4291c4f3efa49f4edf41123d
- PublicKey Algorithm: RSA
- Bit Size: 2048
- Fingerprint: a84f890d7dfbf1514fc69313bf99aa8a826bade3927236f447af63fbb18a8ea6
- Found 1 unique certificate
App Permissions:

1. Normal Permissions:
- ACCESS_NETWORK_STATE: Allows the app to view the network status of all networks.
- FOREGROUND_SERVICE: Enables regular apps to use foreground services.
- FOREGROUND_SERVICE_DATA_SYNC: Allows data synchronisation with foreground services.
- INTERNET: Grants full internet access.
2. Signature Permissions:
- BROADCAST_SMS: Sends SMS-received broadcasts. This can be abused by malicious apps to forge incoming SMS messages.
3. Dangerous Permissions:
- READ_PHONE_NUMBERS: Grants access to the device’s phone number(s).
- READ_PHONE_STATE: Reads the phone’s state and identity, including phone features and data.
- READ_SMS: Allows the app to read SMS or MMS messages stored on the device or SIM card. Malicious apps could use this to read confidential messages.
- RECEIVE_SMS: Enables the app to receive and process SMS messages. Malicious apps could monitor or delete messages without showing them to the user.
- SEND_SMS: Allows the app to send SMS messages. Malicious apps could send messages without the user’s confirmation, potentially leading to financial costs.
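The component counts and permissions listed above can also be extracted directly from the APK. The rough sketch below uses the open-source androguard library for this; it is an assumption about tooling rather than the exact workflow used here, and method names may vary slightly between androguard versions.

```python
# Sketch: enumerate manifest components and requested permissions of an APK
# with androguard (pip install androguard). The file name is illustrative.
from androguard.misc import AnalyzeAPK

a, _, _ = AnalyzeAPK("SBI SELF KYC_015850.apk")

print("Package:   ", a.get_package())
print("Activities:", len(a.get_activities()))
print("Services:  ", len(a.get_services()))
print("Receivers: ", len(a.get_receivers()))
print("Providers: ", len(a.get_providers()))

# Flag SMS-related permissions, which this campaign abuses to intercept OTPs.
for perm in a.get_permissions():
    marker = "  <-- SMS related" if "SMS" in perm else ""
    print(perm + marker)
```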
Further analysis on the VirusTotal platform using the file's MD5 hash showed that 24 of 68 security vendors flagged this APK as malicious; the accompanying graph represents the distribution of detections across the environment.
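A hash lookup of this kind can also be scripted. The sketch below queries VirusTotal's public v3 API for an existing report on the sample by its MD5 hash; it assumes the requests library, a valid API key, and the documented v3 response schema.

```python
# Sketch: fetch VirusTotal's existing report for a file by hash (v3 API).
# Requires a VirusTotal API key; the key below is a placeholder.
import requests

API_KEY = "YOUR_VT_API_KEY"
md5_hash = "55fdb5ff999656ddbfa0284d0707d9ef"

resp = requests.get(
    f"https://www.virustotal.com/api/v3/files/{md5_hash}",
    headers={"x-apikey": API_KEY},
    timeout=30,
)
resp.raise_for_status()

stats = resp.json()["data"]["attributes"]["last_analysis_stats"]
print(f"Flagged malicious by {stats['malicious']} of {sum(stats.values())} engines")
```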


Key Takeaways:
- Normal permissions: generally safe, covering basic functionality such as network state and internet access.
- Signature permissions: can pose risks when misused, especially the SMS broadcast permission.
- Dangerous permissions: provide access to sensitive data such as phone numbers and device identity, which can be exploited by malicious apps.
- The dangerous SMS permissions (read, receive, send) can lead to privacy breaches or financial loss.
How to Identify the Scam:
- Official Statement: SBI never asks customers to download unauthorised APKs for KYC updates or other services. All official correspondence takes place via the SBI YONO app, which is available on official app stores.
- No Immediate Threats: Bank correspondence never employs menacing language or issues harsh deadlines, such as "your account will be blocked today."
- Email Domain and SMS Number: Verified email addresses or phone numbers are used for official SBI correspondence. Generic, unauthorized numbers or addresses are frequently used in scams.
- Links and APK Files: Steer clear of downloading APK files from unreliable sources at all times. For app downloads, visit the Apple App Store or Google Play Store instead.
CyberPeace Advisory:
- The Research team recommends that people should avoid opening such messages sent via social platforms. One must always think before clicking on such links, or downloading any attachments from unauthorised sources.
- Avoid downloading any application from third-party sources instead of the official app store. This greatly reduces the risk of downloading a malicious app, as official app stores have strict guidelines for app developers and review each app before it is published.
- Even if you download the application from an authorised source, check the app's permissions before you install it. Some malicious apps may request access to sensitive information or resources on your device. If an app is asking for too many permissions, it's best to avoid it.
- Keep your device and the app-store app up to date. This will ensure that you have the latest security updates and bug fixes.
- Falling into such a trap could result in a complete compromise of the system, including access to sensitive information such as microphone recordings, camera footage, text messages, contacts, pictures, videos, and even banking applications, and could lead to financial loss.
- Do not share confidential details such as credentials or banking information in response to such phishing messages.
- Never share or forward fake messages containing links on any social platform without proper verification.
Conclusion:
Fake APK phishing scams increasingly target financial institutions. This report outlines safety steps for SBI customers and ways to spot and steer clear of these cons. Keep in mind that legitimate banks never ask you to download an APK from unofficial websites or threaten to close your account immediately. To stay safe, use SBI's official YONO app on both platforms, download apps only from trusted sources such as Google Play or the Apple App Store, verify information before acting on it, turn on 2FA for all your banking and financial accounts, and report any scams you encounter to SBI or your local cyber police.

Introduction
The role of ‘Small and Medium Enterprises’ (SMEs) in the economic and social development of the country is well established. The SME sector is often driven by individual creativity and innovation. Contributing about 8% of the country’s GDP, 45% of manufactured output, and 40% of exports, SMEs provide employment to about 60 million people through over 26 million enterprises producing over six thousand products.
It would be an understatement to say that the SMEs sector in India is highly heterogeneous in terms of the size of the enterprises, variety of products and services produced and the levels of technology employed. With the SME sector booming across the country, these enterprises are contributing significantly to local, state, regional and national growth and feeding into India’s objectives of inclusive, sustainable development.
As the digital economy expands, SMEs cannot be left behind and must integrate online to be able to grow and prosper. This development is not without its risks and cybersecurity concerns and digital threats like misinformation are fast becoming a pressing pain point for the SME sector. The unique challenge posed to SMEs by cyber threats is that while the negative consequences of digital risks are just as damaging for the SMEs as they are for larger industries, the former’s ability to counter these threats is not at par with the latter, owing to the limited nature of resources at their disposal. The rapid development of emerging technologies like artificial intelligence makes it easier for malicious actors to develop bots, deepfakes, or other forms of manipulated content that can steer customers away from small businesses and the consequences can be devastating.
Misinformation is the sharing of inaccurate and misleading information, and the act can be both deliberate and unintentional. Malicious actors can use fake reviews, rumours, or false images to promote negative content or create backlash against a business’ brand and reputation. For a fledgling or growing enterprise, its credibility is a critical asset and any threat to the same is as much a cause for concern as any other operational hindrance.
Relationship Building to Counter Misinformation
We live in a world that is dominated by brands. A brand should ideally inspire trust. It is the single most powerful and unifying characteristic that embodies an organisation's culture and values and once well-established, can create incremental value. Businesses report industry rumours where misinformation resulted in the devaluation of a product, sowing mistrust among customers, and negatively impacting the companies’ revenue. Mitigating strategies to counter these digital downsides can include implementing greater due diligence and basic cyber hygiene practices, like two-factor or multi-factor authentication, as well as open communication of one’s experiences in the larger professional and business networks.
The loss of customer trust can be fatal for a business, and for an SME, access to the scale of digital and other resources required to restore reputations may simply not be a feasible option. Creating your brand story is not just the selling pitch you give to customers and investors, but is also about larger qualitative factors such as your own motivation for starting the enterprise or the emotional connection your audience base enjoys with your organisation. The brand story is a mosaic of multiple tangible and intangible elements that all come together to determine how the brand is perceived by its various stakeholders. Building a compelling and fortified brand story which resonates deeply with people is an important step in developing a robust reputation. It can help inoculate against several degrees of misinformation and malicious attempts and ensure that customers continue to place their faith in the brand despite attempts to hurt this dynamic.
Engaging with the target audience, i.e., the customer base, is part of an effective marketing and misinformation-inoculation strategy. SMEs should also continuously assess their strategies, adapt to market changes, and remain agile in their approach to stay competitive and relevant in today's dynamic business environment. These strategies lead to greater customer engagement through feedback, reviews, and surveys, which help build trust and loyalty. Innovative and dynamic customer service engages the target audience and helps the business stay competitive and relevant.
Crisis Management and Response
Having a crisis management strategy is an important practice for all SMEs and should be mandated for better policy implementation. Businesses need greater due diligence and basic cyber hygiene practices, like two-factor authentication, essential compliances, strong password protocols, transparent disclosure, etc.
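To make the two-factor authentication recommendation concrete, the sketch below shows a time-based one-time password (TOTP), the mechanism behind most authenticator apps. It is a minimal illustration assuming the pyotp library; the secret is generated on the fly purely for demonstration.

```python
# Minimal sketch of TOTP-based two-factor authentication using pyotp
# (pip install pyotp). The secret is illustrative; in practice it is
# generated once per user at enrolment and stored securely server-side.
import pyotp

secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

code = totp.now()                      # 6-digit code that rotates every 30 seconds
print("Current code:", code)
print("Verified:", totp.verify(code))  # the check a server performs at login
```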
The following steps should form part of a crisis management and response strategy:
- Assessing the damage by identifying the misinformation spread and its impact is the first step.
- Issuing a response in the form of a public statement by engaging the media should precede legal action.
- Two levels of communication need to take place in response to a misinformation attack. The first tier is internal, directed at employees, and should clarify the implications of the incident and the organisation’s response plan. The second is aimed at customers, via direct outreach, to clarify the situation and provide accurate information on the matter. If required, employees can be trained to handle customer enquiries regarding the misinformation.
- The digital engagement of the enterprise should be promptly updated and social media platforms and online communications must address the issue and provide clarity and factual information.
- Immediate action must include a plan to rebuild reputations and trust by ensuring customers of the high quality of products and services. The management should seek customer feedback and show commitment to improving processes and transparency. Sharing positive testimonials and stories of satisfied customers can also help at this stage.
- Engaging with the community and collaborating with organisations is also an important part of crisis management.
While these steps are for rebuilding and crisis management, further steps also need to be taken:
- Monitoring customer sentiment and gauging the effectiveness of the response is also necessary; if required, strategic adjustments can be made as circumstances evolve.
- Depending on the severity of the impact, management may choose to engage the professional help of PR consultants and crisis management experts to develop comprehensive recovery plans and help navigate the situation.
- A long-term strategy which focuses on building resilience against future attacks is important. Along with this, engaging in transparency and proactive communication with stakeholders is a must.
Legal and Ethical Considerations
SME administrators must prioritise ethical market practices and appreciate that SMEs are subject to laws dealing with defamation, intellectual property rights (trademark and copyright infringement in particular), data protection and privacy, and consumer protection. Knowing these laws and ensuring that the rights of other enterprises and their consumers are not infringed is integral to continuing to do business legally.
Ethical and transparent business conduct includes clear and honest communication and proactive public redressal mechanisms in the event of misinformation or mistakes. These efforts go a long way towards building trust and accountability.
Proactive public engagement is an important step in building relationships. SMEs can engage with the community where they conduct their business through outreach programs and social media engagement. Efforts to counter misinformation through public education campaigns that alert customers and other stakeholders about misinformation serve the dual purpose of countering misinformation and creating deep community ties. SME administrators should monitor content and developments in their markets and sectors to ensure that their marketing practices are ethical and not creating or spreading misinformation, be it in the form of active sensationalising of existing content or passive dissemination of misinformation created by others. Fact-checking tools and expert consultations can help address and prevent a myriad of problems and should be incorporated into everyday operations.
Conclusion
Developing strong cybersecurity protocols, practising basic digital hygiene and ensuring regulatory compliances are crucial to ensure that a business not only survives but also thrives. Therefore, a crisis management plan and trust-building along with ethical business and legal practices go a long way in ensuring the future of SMEs. In today's digital landscape, misinformation is pervasive, and trust has become a cornerstone of successful business operations. It is the bedrock of a resilient and successful SME. By implementing and continuously improving trust-building efforts, businesses can not only navigate the challenges of misinformation but also create lasting value for their customers and stakeholders. Prioritising trust ensures long-term growth and sustainability in an ever-evolving digital landscape.

Introduction
With the rapid development of technology, voice cloning scams are one issue that has recently come to light. Scammers are moving forward with AI, and their methods and plans for deceiving and scamming people have also changed. Deepfake technology creates realistic imitations of a person’s voice that can be used to commit fraud, dupe a person into giving up crucial information, or impersonate a person for illegal purposes. We will look at the dangers and risks associated with AI voice cloning frauds, how scammers operate, and how one might protect oneself.
What is Deepfake?
A “deepfake” is artificial intelligence (AI)-generated media: fake or altered audio, video, and film that pass for the real thing. The name combines the words “deep learning” and “fake”. Deepfake technology creates realistic-looking or realistic-sounding content by analysing and synthesising large volumes of data using machine learning algorithms. Con artists employ the technology to portray someone doing or saying something that never happened; a widely cited example involves deep voice impersonations of the American President. Deep voice impersonation technology can be used maliciously, such as in deep voice fraud or in disseminating false information. As a result, there is growing concern about the potential influence of deepfake technology on society and the need for effective tools to detect and mitigate the hazards it may pose.
What exactly are deepfake voice scams?
Deepfake voice scams use artificial intelligence (AI) to create synthetic audio recordings that sound like real people. Using this technology, con artists can impersonate someone else over the phone and pressure their victims into providing personal information or paying money. A con artist may pose as a bank employee, a government official, or a friend or relative by using a deepfake voice. The aim is to earn the victim’s trust and raise the likelihood that they will fall for the hoax by conveying a false sense of familiarity and urgency. Deepfake voice frauds are increasing in frequency as the technology becomes more widely available, more sophisticated, and harder to detect. To avoid becoming a victim of such fraud, it is necessary to be aware of the risks and take appropriate precautions.
Why do cybercriminals use AI voice deep fake?
Cybercriminals use AI voice deepfake technology to pose as people or entities in order to mislead users into providing private information, money, or system access. With it, they can create audio recordings that mimic real people or entities, such as CEOs, government officials, or bank employees, and use them to trick victims into taking actions that benefit the criminals. This can involve asking victims for money, disclosing login credentials, or revealing sensitive information. Deepfake AI voice technology can also be employed in phishing attacks, where fraudsters create audio recordings that impersonate genuine messages from organisations or people the victims trust. These recordings can trick people into downloading malware, clicking on dangerous links, or giving out personal information. Additionally, false audio evidence can be produced to support false claims or accusations; this is particularly risky in legal proceedings, where falsified audio evidence may lead to wrongful convictions or acquittals. AI voice deepfake technology gives con artists a potent tool for tricking and controlling victims, and every organisation and the general public must be informed of its risks and adopt appropriate safety measures.
How to spot voice deepfake and avoid them?
Deep fake technology has made it simpler for con artists to edit audio recordings and create phoney voices that exactly mimic real people. As a result, a brand-new scam called the “deep fake voice scam” has surfaced. In order to trick the victim into handing over money or private information, the con artist assumes another person’s identity and uses a fake voice. What are some ways to protect oneself from deepfake voice scams? Here are some guidelines to help you spot them and keep away from them:
- Steer clear of telemarketing calls
- One of the most common tactics used by deep fake voice con artists, who pretend to be bank personnel or government officials, is making unsolicited phone calls.
- Listen closely to the voice
- If anyone phones you claiming to be someone else, pay special attention to their voice. Are there any peculiar pauses or inflexions in their speech? Anything that doesn’t seem right could be a deepfake voice fraud.
- Verify the caller’s identity
- It’s crucial to verify the caller’s identity in order to avoid falling for a deepfake voice scam. When in doubt, ask for their name, job title, and employer, and then do some research to confirm they are who they say they are.
- Never divulge confidential information
- No matter who calls, never give out personal information such as your Aadhaar number, bank account details, or passwords over the phone. Legitimate companies or organisations will never request personal or financial information over the phone; if a caller does, it’s a warning sign of a scam.
- Report any suspicious activities
- Inform the appropriate authorities if you think you’ve fallen victim to a deepfake voice fraud. This may include your bank, credit card company, local police station, or the nearest cyber cell. By reporting the fraud, you could prevent others from becoming victims.
Conclusion
In conclusion, the field of AI voice deep fake technology is fast expanding and has huge potential for beneficial and detrimental effects. While deep fake voice technology has the potential to be used for good, such as improving speech recognition systems or making voice assistants sound more realistic, it may also be used for evil, such as deep fake voice frauds and impersonation to fabricate stories. Users must be aware of the hazard and take the necessary precautions to protect themselves as AI voice deep fake technology develops, making it harder to detect and prevent deep fake schemes. Additionally, it is necessary to conduct ongoing research and develop efficient techniques to identify and control the risks related to this technology. We must deploy AI appropriately and ethically to ensure that AI voice-deep fake technology benefits society rather than harming or deceiving it.