#FactCheck

Executive Summary:
This report examines a recent cyber threat: a fake message impersonating India Post, one of the country's largest postal services. The scam alerts recipients that a delivery failed due to incomplete address information and asks them to click a link (http://iydc[.]in/u/5c0c5939f) to confirm their address. Victims who follow this deceptive process put their personal data and device security at risk. Users are strongly advised to exercise caution and avoid clicking on suspicious links or messages.
False Claim:
The fraudsters send an SMS claiming that an India Post package could not be delivered due to incomplete address information. Recipients are given a 12-hour deadline to confirm their address by clicking the provided link (http://iydc[.]in/u/5c0c5939f). The misleading message seeks to fool people into disclosing personal information or compromising the security of their devices.

The Deceptive Journey:
- First Contact: An SMS claiming to be from India Post informs the recipient that a package could not be delivered because of incomplete address information.
- Call to Action: Recipients are urged to click the given link (http://iydc[.]in/u/5c0c5939f) to update their address. The message creates panic by giving them only 12 hours to confirm the address via the suspicious link.
- Click the Link: Curious or worried recipients click on the link.
- User Data: Once the link is clicked, it is suspected to launch remote scripts in the background that collect personal information from users.
- Device Compromise: The website may also attempt to infect the device with malware or exploit security flaws.
The Analysis:
- Phishing Technique: The scam poses as the India Post team and uses a classic phishing lure, telling recipients to click a suspicious link to confirm their address because a package supposedly could not be delivered.
- Fake Website Creation: Victims who click the link (http://iydc[.]in/u/5c0c5939f) to update their address are redirected to a fraudulent website.
- Background Scripts: Scripts performing malicious operations, such as stealing visitor information or distributing malware, are suspected to be running in the background. These scripts can exploit vulnerabilities in the user's device or browser to extract further information or harm system security.
- Risk of Data Theft: By creating a false sense of urgency, this fraud lures victims into handing over their personal details. Threat actors can later use that data for financial fraud, identity theft and other criminal purposes.
- Domain Analysis: The iydc.in domain was registered on 5 April 2024, only shortly before the campaign began. Fraudulent domains are typically registered shortly before being put to criminal use.
- Registrar: The domain is registered through GoDaddy.com, LLC, a reputable registrar.
- DNS: The domain uses Cloudflare name servers (chase.ns.cloudflare.com and delilah.ns.cloudflare.com) for domain name resolution.
- Registrant: Little is known about the registrant beyond a Thailand location, most likely because a WHOIS privacy service is in use.

- Domain Name: iydc.in
- Registry Domain ID: DB3669B210FB24236BF5CF33E4FEA57E9-IN
- Registrar URL: www.godaddy.com
- Registrar: GoDaddy.com, LLC
- Registrar IANA ID: 146
- Updated Date: 2024-04-10T02:37:06Z
- Creation Date: 2024-04-05T02:37:05Z (registered very recently)
- Registry Expiry Date: 2025-04-05T02:37:05Z
- Registrant State/Province: errww
- Registrant Country: TH (Thailand)
- Name Server: delilah.ns.cloudflare.com
- Name Server: chase.ns.cloudflare.com
Note: Cybercriminals used Cloudflare technology to mask the actual IP address of the fraudulent website.
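The registration details above come from public WHOIS records. As an illustration only, the lookup can be reproduced with the open-source python-whois package; this is a minimal sketch and not a description of the tooling actually used for this report.

```python
# Minimal WHOIS lookup sketch (assumes the python-whois package: pip install python-whois).
# The defanged domain from the report is re-assembled here purely for illustration.
import whois

def summarize_domain(domain: str) -> None:
    record = whois.whois(domain)  # queries the public WHOIS servers for the domain
    print("Domain:      ", record.domain_name)
    print("Registrar:   ", record.registrar)
    print("Created:     ", record.creation_date)
    print("Expires:     ", record.expiration_date)
    print("Name servers:", record.name_servers)
    print("Country:     ", record.country)

summarize_domain("iydc.in")
```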
CyberPeace Advisory:
- Do not open messages received via social platforms that appear suspicious or unsolicited. From the outset, your own discretion is your best weapon.
- Falling prey to such scams could compromise your entire system, potentially granting unauthorized access to your microphone, camera, text messages, contacts, pictures, videos, banking applications, and more. Keep your cyber world safe against any attacks.
- Never reveal sensitive data such as your login credentials and banking details to entities you have not verified as trustworthy.
- Before sharing any content or clicking on links within messages, always verify the legitimacy of the source. Protect not only yourself but also those in your digital circle.
- Verify the authenticity of alluring offers before taking any action.
Conclusion:
The India Post delivery scam is an example of fraud that uses the name of a trusted postal service to trick people. The campaign uses deceptive texts and fake websites to trick recipients into giving out personal information, which can later lead to identity theft, financial loss or device compromise. Technical analysis shows the sophisticated tactics used by the fraudsters, including phishing, data-harvesting scripts and the creation of recently registered fraudulent domains. When encountering such messages, it is important to verify their authenticity with official sources and take proactive measures to protect both your personal information and your devices from cyber threats. People can reduce the risk of falling for online scams by staying informed and following cybersecurity best practices.

Executive Summary:
A picture of a boy making sand art of Indian cricketer Virat Kohli is spreading on social media, but the claim behind it is false: the image does not show real sand art. Analysis using AI-detection tools such as 'Hive' and 'Content at Scale AI Detection' confirms that the images are entirely generated by artificial intelligence. Netizens are sharing these pictures on social media without knowing that they are computer generated.

Claims:
The collage of beautiful pictures displays a young boy creating sand art of Indian Cricketer Virat Kohli.




Fact Check:
When we examined the posts, we found anomalies in each photo that are common in AI-generated images.

The anomalies include the abnormal shape of the child's feet, a logo blended into the sand colour in the second image, and the misspelling 'spoot' instead of 'sport'. The cricket bat is perfectly straight, which would be odd for a sand sculpture. In one photo the child's left hand bears a tattoo, while in the other photos it does not. Additionally, the boy's face in the second image does not match his face in the other images. These observations made us more suspicious that the images are synthetic media.
We then ran the images through an AI-generated image detection tool named 'Hive', which rated them as 99.99% likely to be AI-generated. We then cross-checked with another detection tool, 'Content at Scale'.
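Detection services such as these are normally used through their web interfaces, but the general pattern of submitting an image for automated scoring can be sketched as below. The endpoint, header and response field are entirely hypothetical placeholders and do not describe the real Hive or Content at Scale APIs.

```python
# Sketch: submitting an image to a hypothetical AI-detection HTTP API.
# The URL, key and response field below are placeholders, not a real service's API.
import requests

API_URL = "https://api.example-detector.test/v1/detect"  # hypothetical endpoint
API_KEY = "YOUR_API_KEY"                                  # hypothetical credential

def ai_probability(image_path: str) -> float:
    with open(image_path, "rb") as fh:
        response = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"image": fh},
            timeout=30,
        )
    response.raise_for_status()
    return response.json().get("ai_probability", 0.0)     # hypothetical field name

print(f"AI-generated probability: {ai_probability('sand_art_collage.jpg'):.2%}")
```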


Hence, we conclude that the viral collage of images is AI-generated and not sand art made by any child. The claim is false and misleading.
Conclusion:
In conclusion, the claim that the pictures show a sand art portrait of Indian cricket star Virat Kohli made by a child is false. AI detection tools and analysis of the photos indicate that they were most likely created with an AI image-generation tool rather than by a real sand artist. Therefore, the images do not represent the alleged artwork or its claimed creator.
Claim: A young boy has created sand art of Indian Cricketer Virat Kohli
Claimed on: X, Facebook, Instagram
Fact Check: Fake & Misleading

Executive Summary:
This report discloses a new cyber threat targeting internet users in the name of "Aarong Ramadan Gifts". The fraudsters imitate Aarong, a popular Bangladeshi brand known for its Bengali ethnic wear and handicrafts, and lure victims with the offer of exclusive Ramadan gifts. Once users click on the link, they are led through a staged sequence of quizzes, gift boxes and fake social proof that can compromise their personal information and devices. Understanding how the scam works helps users stay cautious and avoid falling victim to such threats.
False Claim:
A false message circulating on social media with an accompanying link claims that Aarong, one of the most respected brands in Bangladesh for its exquisite ethnic wear and handicrafts, is offering Ramadan gifts through an exclusive online promotion. That is only the facade; the scam's real aim is to get users to click harmful links that can compromise their personal data and devices.

The Deceptive Journey:
- The landing page opens with a greeting and a catchy photo of an Aarong store, then encourages visitors to take a short quiz to claim the gift. This is designed to create a false impression of authenticity and trustworthiness.
- A section at the end of the page mimics a social media comment thread in which supposed users describe the benefits they received. This technique manufactures the appearance of a solid base of support and many participants.
- The quiz starts with a few easy questions about the user's familiarity with Aarong and their demographics. This data is valuable for building more sophisticated threats and can be used to target specific people in the future.
- After the user clicks OK, the screen displays a grid of gift boxes, and the user must make at least three attempts to win the reward. This common tactic keeps users engaged longer and increases the chance that they comply with the fraudulent scheme.
- The user is then instructed to share the campaign on WhatsApp and must keep clicking the WhatsApp button until a progress bar completes. This both spreads and perpetuates the scam, reaching many more users.
- After completing the steps, the user is shown instructions on how to claim the prize.
The Analysis:
- The home page and quiz are structured to maintain a false impression of legitimacy, drawing victims into the fraudulent design. The requirement to forward the message on WhatsApp pulls more and more users into the scam.
- The ultimate purpose of the scam appears to be to harvest personal data and gain access to users' devices, raising the risk of further cyber threats such as identity theft, financial fraud or malware installation.
- We have cross-checked and, as of now, found no well-established, credible source or official notification from Aarong confirming such an offer.
- The campaign is hosted on a third-party domain instead of the official website, which raises suspicion. The domain was also registered recently.
- Intercepted requests revealed a backend connection to Baidu, a China-linked analytics service; a sketch of how such third-party connections can be spotted is shown below.
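As a rough illustration of how such a backend connection can be spotted, the sketch below lists the third-party hosts referenced by a page's script tags using requests and BeautifulSoup. This is a simplified stand-in for the traffic interception mentioned above, and the URL is a placeholder; suspicious pages should only ever be examined in an isolated environment.

```python
# Sketch: list third-party hosts referenced by a page's <script> tags.
# Placeholder URL; fetch suspicious pages only from an isolated analysis environment.
from urllib.parse import urlparse

import requests
from bs4 import BeautifulSoup

def external_script_hosts(url: str) -> set:
    page_host = urlparse(url).netloc
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    hosts = set()
    for tag in soup.find_all("script", src=True):
        host = urlparse(tag["src"]).netloc
        if host and host != page_host:  # keep only hosts other than the page itself
            hosts.add(host)
    return hosts

print(external_script_hosts("https://example.com/promo"))  # e.g. analytics or CDN domains
```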

- Domain Name: apronicon.top
- Registry Domain ID: D20231130G10001G_13716168-top
- Registrar WHOIS Server: whois.west263[.]com
- Registrar URL: www.west263[.]com
- Updated Date: 2024-02-28T07:21:18Z
- Creation Date: 2023-11-30T03:27:17Z (Recently created)
- Registry Expiry Date: 2024-11-30T03:27:17Z
- Registrar: Chengdu west dimension digital
- Registrant State/Province: Hei Long Jiang
- Registrant Country: CN (China)
- Name Server: amos.ns.cloudflare[.]com
- Name Server: zara.ns.cloudflare[.]com
Note: Cybercriminals used Cloudflare technology to mask the actual IP address of the fraudulent website.
CyberPeace Advisory:
- Do not open messages received via social platforms that appear suspicious or unsolicited. From the outset, your own discretion is your best weapon.
- Falling prey to such scams could compromise your entire system, potentially granting unauthorized access to your microphone, camera, text messages, contacts, pictures, videos, banking applications, and more. Keep your cyber world safe against any attacks.
- Never, under any circumstances, reveal sensitive data such as your login credentials and banking details to entities you have not verified as trustworthy.
- Before sharing any content or clicking on links within messages, always verify the legitimacy of the source. Protect not only yourself but also those in your digital circle.
- To confirm the truthfulness of offers and messages, go directly to official sources and the companies themselves. Verify the authenticity of alluring offers before taking any action.
Conclusion:
The Aarong Ramadan Gift scam is a fraud that exploits victims' trust in a reputable brand. Understanding the mechanisms used to make the campaign look genuine helps us become more conscious of such threats and take measures so that our community does not fall for them. Stay aware, check credibility, and spread awareness wherever you can to help build a security-conscious digital space.

Executive Summary:
A photo circulating online that claims to show the future design of the Bhabha Atomic Research Centre (BARC) building has been found to be fake. There is no official notice or confirmation from BARC on its website or social media handles, and an AI content detection tool indicates that the image was generated by AI. In short, the viral picture does not show authentic architectural plans for the BARC building.

Claims:
A photo allegedly showing the new design of the Bhabha Atomic Research Centre (BARC) building is circulating widely across social media platforms.


Fact Check:
To begin our investigation, we checked BARC's official website, including its tender and NIT notifications, for any announcements of new construction or renovation.
We found no information corresponding to the claim.

We then visited BARC's official social media pages on Facebook, Instagram and X and searched for any recent updates about a new building; again, there was no information about the supposed design. To test whether the viral image could be AI-generated, we ran it through Hive's AI content detection tool, 'AI Classifier'. The tool assessed the image as AI-generated with 100% confidence.

To be sure, we also used another AI-image detection tool, 'Is It AI?', which rated the image as 98.74% AI-generated.

Conclusion:
To conclude, the claim that the image shows the new BARC building is fake and misleading. A detailed investigation, checking BARC's official channels and using AI detection tools, indicates that the picture is most likely AI-generated rather than an original architectural design. BARC has published no information or announcement about any such plan, which leaves the claim without any credible source to support it.
Claim: Many social media users claim to show the new design of the BARC building.
Claimed on: X, Facebook
Fact Check: Misleading

Executive Summary:
Following the recent earthquake in Taiwan, a video has gone viral on social media claiming to show that event. Fact-checking reveals it to be an old video from September 2022, when Taiwan experienced another earthquake, of magnitude 7.2. Reverse image searches and comparison with older footage establish that the viral video is from the 2022 earthquake and not the recent 2024 event. Several news outlets covered the 2022 incident, providing additional confirmation of the video's origin.

Claims:
News is circulating on social media about the recent earthquake in Taiwan and Japan. A post on X states:
“BREAKING NEWS :
Horrific #earthquake of 7.4 magnitude hit #Taiwan and #Japan. There is an alert that #Tsunami might hit them soon”.

Similar Posts:


Fact Check:
We started our investigation by watching the video thoroughly and splitting it into frames. A reverse image search on those frames led us to an X (formerly Twitter) post in which a user had shared the same viral video on 18 September 2022. Notably, the post carries the caption:
“#Tsunami warnings issued after Taiwan quake. #Taiwan #Earthquake #TaiwanEarthquake”
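The frame-splitting step mentioned above can be reproduced with a short OpenCV script; the sketch below is generic (the file name is a placeholder), and the saved frames are then submitted manually to a reverse image search engine.

```python
# Sketch: split a video into frames for reverse image search (requires opencv-python).
import cv2

def extract_frames(video_path: str, every_n: int = 30) -> int:
    cap = cv2.VideoCapture(video_path)
    saved, index = 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:  # end of video or read error
            break
        if index % every_n == 0:  # roughly one frame per second for 30 fps footage
            cv2.imwrite(f"frame_{saved:04d}.jpg", frame)
            saved += 1
        index += 1
    cap.release()
    return saved

print(extract_frames("viral_video.mp4"), "frames saved")
```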

The same viral video was posted on several news media in September 2022.

The viral video was also shared on September 18, 2022 on NDTV News channel as shown below.

Conclusion:
To conclude, the viral video claimed to depict the 2024 Taiwan earthquake actually dates from September 2022. A careful comparison of the old footage with the new claims makes it clear that the video does not show the recent earthquake. The viral video is therefore misleading. It is important to validate information before sharing it on social media to prevent the spread of misinformation.
Claim: Video circulating on social media captures the recent 2024 earthquake in Taiwan.
Claimed on: X, Facebook, YouTube
Fact Check: Fake & Misleading, the video actually refers to an incident from 2022.

Executive Summary:
A recent viral message on social media platforms such as X and Facebook claims that the Indian Government will start charging 18% GST on "good morning" texts from April 1, 2024. This is misinformation. The message relies on a newspaper clipping and a video that were actually part of a fake news story from 2018. The newspaper article from Navbharat Times, published on March 2, 2018, was clearly intended as a joke, and an ABP News video originally aired on March 20, 2018 was part of a fact-checking segment that debunked the rumour of a GST on greetings.

Claims:
The claim circulating online suggests that the Government will start applying 18% GST to all "Good Morning" texts sent through mobile phones from April 1 this year, with the tax added to monthly mobile bills.




Fact Check:
When we received the claim, we first ran relevant keyword searches and found a Facebook video by ABP News titled 'Viral Sach: Govt to impose 18% GST on sending good morning messages on WhatsApp?'


We watched the full video and found that the news is six years old. The Research Wing of CyberPeace Foundation also located the full version of the widely shared ABP News clip on the channel's website, dated March 20, 2018. The video shows a newspaper clipping from Navbharat Times, published on March 2, 2018, carrying a humorous article with the line "Bura na mano, Holi hain." The recently viral image is a cutout from that 2018 ABP News segment.
Hence, the recent image that is spreading widely is Fake and Misleading.
Conclusion:
The viral message claiming that the government will impose GST (Goods and Services Tax) on "Good morning" messages is completely fake. The newspaper clipping used in the message is from an old comic article published by Navbharat Times, while the clip and image from ABP News have been taken out of context to spread false information.
Claim: India will introduce a Goods and Services Tax (GST) of 18% on all "good morning" messages sent through mobile phones from April 1, 2024.
Claimed on: Facebook, X
Fact Check: Fake; it originates from a comic article published by Navbharat Times on 2 March 2018
Executive Summary:
A widely shared post on social media claims that a 3D model of Chanakya, supposedly made by 'Magadha DS University', matches MS Dhoni. Fact-checking reveals that it is in fact a 3D model of MS Dhoni, not Chanakya. The model was created by artist Ankur Khatri, and Magadha DS University does not appear to exist. Khatri uploaded the model to ArtStation, describing it as an MS Dhoni likeness study.

Claims:
The image being shared is claimed to be a 3D rendering of the ancient philosopher Chanakya created by Magadha DS University. However, people are noticing a striking similarity to the Indian cricketer MS Dhoni in the image.



Fact Check:
After receiving the post, we ran a reverse image search on the image and landed on the portfolio of a freelance character artist named Ankur Khatri. We found the viral image there, with the work titled "MS Dhoni likeness study", along with several other character models in his portfolio.



Subsequently, we searched for the institution named 'Magadha DS University' but found no university of that name; the closest match is Magadh University, located in Bodh Gaya, Bihar. We searched the internet for any such model made by Magadh University and found nothing. We then analysed the freelance character artist's profile and found that he runs a dedicated Instagram channel where he posted a detailed video of the creative process behind the MS Dhoni character model.

We concluded that the viral image is not a reconstruction of the Indian philosopher Chanakya but a likeness of cricketer MS Dhoni created by the artist Ankur Khatri, not by any university named Magadha DS.
Conclusion:
The viral claim that the 3D model is a recreation of the ancient philosopher Chanakya by a university called Magadha DS University is false and misleading. In reality, the model is a digital artwork of former Indian cricket captain MS Dhoni, created by artist Ankur Khatri. There is no evidence that a Magadha DS University exists; the similarly named Magadh University in Bodh Gaya, Bihar, has no evident connection to the model's creation. Therefore, the claim is debunked, and the image is confirmed to be a depiction of MS Dhoni, not Chanakya.

Executive Summary:
A fake photo claiming to show cricketer Virat Kohli watching a press conference by Rahul Gandhi before a match has been widely shared on social media. The original photo shows Kohli on his phone with no trace of Gandhi on the screen. The incident supposedly took place on March 21, 2024, before Kohli's team, Royal Challengers Bangalore (RCB), played Chennai Super Kings (CSK) in the Indian Premier League (IPL). Many social media accounts spread the false image and made it viral.

Claims:
The viral photo falsely claims that Indian cricketer Virat Kohli was watching a press conference by Congress leader Rahul Gandhi on his phone before an IPL match. Many social media users shared it to suggest Kohli's interest in politics. The photo was shared on various platforms, including some online news websites.




Fact Check:
After we came across the viral image posted by social media users, we ran a reverse image search and landed on the original image, posted by an Instagram account named virat__.forever_ on 21 March.

The caption of the Instagram post reads, “VIRAT KOHLI CHILLING BEFORE THE SHOOT FOR JIO ADVERTISEMENT COMMENCE.❤️”

Evidently, there is no image of Congress leader Rahul Gandhi on Virat Kohli's phone. Moreover, the viral image was published after the original image, which was posted on March 21.

Therefore, it is apparent that the viral image is an altered version of the original image shared on March 21.
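Reverse image search was sufficient to establish the alteration here. As a complementary check (not one used in the fact-check above), error level analysis can highlight regions of a JPEG that were pasted in or re-edited; the sketch below uses Pillow, and the file names are placeholders.

```python
# Sketch: basic error level analysis (ELA) with Pillow to highlight edited regions.
from PIL import Image, ImageChops, ImageEnhance

def error_level_analysis(path: str, quality: int = 90) -> Image.Image:
    original = Image.open(path).convert("RGB")
    original.save("_resaved.jpg", "JPEG", quality=quality)  # recompress at a known quality
    resaved = Image.open("_resaved.jpg")
    diff = ImageChops.difference(original, resaved)          # per-pixel compression error
    max_diff = max(channel_max for _, channel_max in diff.getextrema()) or 1
    return ImageEnhance.Brightness(diff).enhance(255.0 / max_diff)  # amplify for viewing

# Edited regions tend to stand out as brighter patches in the ELA map.
error_level_analysis("viral_image.jpg").save("ela_map.png")
```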
Conclusion:
To sum up, the viral image is an altered version of the original; the original image's caption shows cricketer Virat Kohli relaxing before a Jio advertisement shoot, not watching any politician's interview. In the age of social media, where false information can spread quickly, critical thinking and fact-checking are more important than ever. It is crucial to check whether something is real before sharing it, to avoid spreading false stories.

Executive Summary:
A viral clip in which Indian batsman Virat Kohli appears to endorse an online casino and guarantee a Rs 50,000 jackpot within three days has been proven fake. The clip, accompanied by manipulated captions, suggests that Kohli admitted to being involved in launching an online casino during an interview with Graham Bensinger, but this is not true. Our investigation found that the original interview, published on YouTube by Bensinger in late 2023, contains no such statements by Kohli. In addition, the AI deepfake analysis tool Deepware flagged the viral video as a deepfake.

Claims:
The viral video claims that cricket star Virat Kohli is promoting an online casino and guarantees that users of the site can make a profit of Rs 50,000 within three days. The CyberPeace Research Team has found that the video is a deepfake, not an original recording, and that there is no credible evidence of Kohli's participation in any such endorsement. Many users are sharing the video with misleading titles across different social media platforms.


Fact Check:
As soon as we learned of the claim, we ran keyword searches for any credible news report about Virat Kohli promoting a casino app and found nothing. We then ran a reverse image search on frames showing Kohli wearing a black T-shirt, as seen in the video, and landed on a YouTube video by Graham Bensinger, an American journalist. The viral clip was taken from this original video.

In the original interview, Kohli discusses his childhood, diet, cricket training, marriage and other topics, but says nothing about launching a casino app.
Close scrutiny of the viral video revealed inconsistencies in the lip-sync and voice. We then ran the video through the Deepware deepfake detection tool, which flagged it as a deepfake.


Finally, we affirm that the viral video is a deepfake and that the claim it makes is false.
Conclusion:
The viral video claiming that cricketer Virat Kohli endorses an online casino and guarantees winnings of Rs 50,000 within three days is entirely fake. This incident demonstrates the need to check facts and sources before believing any information, and to remain sceptical of deepfakes and other AI techniques that are increasingly used to spread misinformation.

Executive Summary:
A viral image circulating on social media claims to show a natural optical illusion in Epirus, Greece. Fact-checking found that the image is an AI-generated artwork created by Iranian artist Hamidreza Edalatnia using the Stable Diffusion AI tool. The CyberPeace Research Team established this through a reverse image search and analysis with the Hive AI content detection tool, which indicated a 100% likelihood of AI generation. The claim that the image shows a natural phenomenon in Epirus, Greece, is false, as no evidence of such an optical illusion in the region was found.

Claims:
The viral image circulating on social media is claimed to depict a natural optical illusion in Epirus, Greece. Users are sharing it on X (formerly Twitter), YouTube and Facebook, and it is spreading rapidly.

Similar Posts:


Fact Check:
Upon receiving the posts, the CyberPeace Research Team first ran a synthetic media check: the Hive AI detection tool rated the image as 100% AI-generated. We then traced the source with a reverse image search, which led to similar posts linking to an Instagram account, hamidreza.edalatnia, whose creator posts visuals of the same style.

We searched his account for the viral image and confirmed that it was created by this artist.

The photo was posted on 10 December 2023, and the artist mentioned that it was generated with the Stable Diffusion AI tool. Hence, the claim that the viral image shows an optical illusion in Epirus, Greece, is misleading.
Conclusion:
The image claiming to show a natural optical illusion in Epirus, Greece, is not genuine. It is a digital artwork created by Hamidreza Edalatnia, an artist from Iran, using the artificial intelligence tool Stable Diffusion. Hence the claim is false.

Executive Summary:
In the digital age, misinformation and deceptive techniques pervade the internet and threaten people's safety and well-being. Recently, an alarming piece of fake information has surfaced promoting a bogus Government subsidy scheme in the name of India Post. It serves criminals who prey on people's vulnerabilities with offers of financial help in exchange for information. In this blog, we take a deep dive into one such common fraud scheme, walk through the stages by which victims are deceived, and offer practical tips to avoid falling for it.
Introduction:
Digital communication reaches individuals faster than ever, and misinformation spreads globally at the same pace, leaving people susceptible to online scams that borrow credibility from familiar institutions. In India, a recent wave of fake news targets people with deceptive claims of a Government subsidy, mainly in the name of India Post. These fraudulent schemes spread via social networks and messaging platforms and exploit individuals' trust in respected establishments in order to commit fraud and collect private data.
Understanding the Claim:
A claim is circulating, purportedly on behalf of the national Government, offering a generous subsidy of $1,066 to deserving residents. Individuals are told they will receive the subsidy once they complete a questionnaire they have received through social media. The questionnaire appears designed to steal confidential information by taking advantage of naivety and carelessness.
The Deceptive Journey Unveiled:
Bogus Offer Presentation: The scheme opens with a misleading message or advertisement designed to convince people to act immediately by instilling a sense of urgent need. Such messages combine persuasive wording with official-looking material to create an illusion of authenticity.
Questionnaire Requirement: Once visitors land on the attractive content, they are directed to fill in a questionnaire that is supposedly required for processing the financial assistance. The questionnaire requests information that is personal in nature.
False Sense of Urgency: A false deadline is often added to pressure people into compliance. The aim is to push individuals to hand over information immediately, without careful examination.
Data Harvesting Tactics: Beneath the apparent offer of financial help lies the real motive: data harvesting. The information collected through the questionnaire can be exploited by scammers for a long time to profit from identity theft, financial crimes and other malicious ends.
Analysis Highlights:
- It is important to note that, at this point, there has been no official declaration or confirmation of any such offer from India Post or the Government. People must therefore be very careful when encountering such messages, which are often used as lures in phishing attacks and misinformation campaigns. Before engaging with or forwarding such claims, always verify the information with trustworthy sources in order to protect yourself online and prevent the spread of false information.
- The campaign is hosted on a third-party domain instead of any official Government website, which raises suspicion. The domain was also registered very recently; a quick way to quantify this recency is sketched after the registration details below.

- Domain Name: ccn-web[.]buzz
- Registry Domain ID: D6073D14AF8D9418BBB6ADE18009D6866-GDREG
- Registrar WHOIS Server: whois[.]namesilo[.]com
- Registrar URL: www[.]namesilo[.]com
- Updated Date: 2024-02-27T06:17:21Z
- Creation Date: 2024-02-11T03:23:08Z
- Registry Expiry Date: 2025-02-11T03:23:08Z
- Registrar: NameSilo, LLC
- Name Server: tegan[.]ns[.]cloudflare[.]com
- Name Server: nikon[.]ns[.]cloudflare[.]com
Note: Cybercriminals used Cloudflare technology to mask the actual IP address of the fraudulent website.
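The recency flagged above can be quantified directly from the WHOIS creation date. The sketch below computes a domain's age in days and flags anything younger than 90 days; the threshold is an arbitrary illustration, not an established rule.

```python
# Sketch: flag recently registered domains using the WHOIS creation date.
from datetime import datetime, timezone

def domain_age_days(creation_date_iso: str) -> int:
    created = datetime.fromisoformat(creation_date_iso.replace("Z", "+00:00"))
    return (datetime.now(timezone.utc) - created).days

age = domain_age_days("2024-02-11T03:23:08Z")  # creation date from the record above
print(f"Domain age: {age} days")
if age < 90:  # arbitrary illustrative threshold
    print("Warning: domain registered very recently - treat with suspicion")
```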
CyberPeace Advisory:
Verification and Vigilance: Be cautious and sceptical. Do not fall prey to such criminal acts. Examine the claims being made and consult credible sources before disclosing anything.
Official Channels: Governments disseminate subsidies and assistance programmes through official websites and established legal channels. Be wary of schemes that do not follow these established protocols.
Educational Awareness: Raising awareness of online scams and the tactics fraudsters use should be treated as a priority. By empowering individuals with knowledge, we can collectively prevent such schemes from spreading.
Reporting and Action: If you encounter suspicious or fraudulent messages, alert the authorities and relevant organizations immediately. Swift action not only protects you but also helps others avoid the costs of related security compromises.
Conclusion:
The rise of the 'Indian Post Countrywide' government subsidy fake news is a stern warning of the dangers present in today's digital ecosystem. Staying wise and sharp about scams, reacting quickly, and following the CyberPeace advisories above will contribute to a safer cyberspace for everyone. Likewise, the ability to judge critically and remain alert is essential to defeat the variety of tricks offenders use to mislead people online.

Executive Summary:
A circulating picture said to show United States President Joe Biden wearing a military uniform during a meeting with military officials has been found to be AI-generated. The viral image falsely claims to show President Biden meeting to authorize US military action in the Middle East. The CyberPeace Research Team has identified the photo as a product of generative AI, not a real photograph, and multiple visual discrepancies in the picture mark it as such.
Claims:
A viral image claiming to show US President Joe Biden wearing a military outfit during a meeting with military officials was created using artificial intelligence. The picture is being shared on social media with the false claim that it shows President Biden convening to authorize the use of the US military in the Middle East.

Similar Post:

Fact Check:
The CyberPeace Research Team found that the photo of US President Joe Biden in a military uniform at a meeting with military officials was made with generative AI and is not authentic. Several obvious visual discrepancies plainly suggest that it is an AI-generated image.

Firstly, President Biden's eyes are completely black; secondly, the military official's face is blended; thirdly, a phone is standing upright without any support.
We then ran the image through an AI image detection tool.

The tool rated the image as 4% human and 96% AI, indicating that it is synthetic content.
We then checked with another tool, Hive Detector.

Hive Detector rated the image as 100% AI-generated, which strongly suggests it is synthetic content.
Conclusion:
The growth of AI-generated content makes it harder to separate fact from fiction, particularly on social media. The fake photo supposedly showing President Joe Biden underscores the need for critical thinking and verification of information online. As the technology evolves, people must remain watchful and rely on verified sources to counter the spread of disinformation. Initiatives to raise awareness of the existence and impact of AI-generated content should also be undertaken to promote a more aware and digitally literate society.
- Claim: A circulating picture said to show United States President Joe Biden wearing a military uniform during a meeting with military officials
- Claimed on: X
- Fact Check: Fake