Acceptable Use Policy

Effective Date: [DATE]

Last Updated: [DATE]


1. Introduction and Scope

This Acceptable Use Policy (“AUP” or “Policy”) establishes the rules and standards governing the use of the Honeycomb platform, including all websites, web applications, mobile applications, application programming interfaces (APIs), and related services operated by Mindhyv, LLC (“Mindhyv,” “we,” “us,” or “our”) (collectively, the “Platform”). This Policy is designed to protect the safety, integrity, and reliability of the Platform and its users, and to ensure compliance with applicable laws and regulations.

This Policy applies to all individuals and entities that access or use the Platform in any capacity, including but not limited to:

  • Registered users (account holders)
  • Content creators and publishers
  • Buyers and sellers on the Marketplace
  • Course creators and learners
  • Employers and job seekers
  • Advertisers
  • Affiliate program participants
  • Site builder users
  • API consumers and third-party integrators
  • Developers of app extensions
  • Visitors who access publicly available content without an account

(collectively, “Users” or “you”).

This Policy applies to all content posted, uploaded, transmitted, shared, or otherwise made available through the Platform (“User Content”), all conduct occurring on or through the Platform, and all use of Platform features, tools, and services, including but not limited to social features (posts, stories, direct messages, rooms), the Marketplace, courses, job listings, advertisements, the site builder, AI-powered features, e-signature tools, the affiliate program, and all app extensions.

This Policy is incorporated by reference into the Honeycomb Terms of Service and supplements all other applicable Platform policies, including the Privacy Policy, Community Guidelines, and Cookie Policy. In the event of a conflict between this Policy and the Terms of Service, the Terms of Service shall control.

By accessing or using the Platform, you acknowledge that you have read, understood, and agree to be bound by this Policy. If you do not agree to this Policy, you must immediately discontinue your use of the Platform.


2. Prohibited Content

You may not post, upload, transmit, share, or otherwise make available on the Platform any content that falls within the following categories. These prohibitions apply to all forms of content, including but not limited to text, images, photographs, videos, audio, graphics, links, files, code, and any other material.

Content that violates any applicable local, state, national, or international law or regulation. This includes, without limitation, content that:

  • Promotes, facilitates, or provides instructions for illegal activity.
  • Constitutes or facilitates the commission of a crime.
  • Violates export control laws, sanctions, or trade restrictions.
  • Constitutes illegal gambling or promotes unlicensed gambling services.
  • Facilitates or promotes illegal drug trafficking or the manufacture of controlled substances.
  • Violates intellectual property laws, including copyright, trademark, patent, or trade secret laws (except as permitted by applicable fair use or fair dealing exceptions).

Any content that depicts, promotes, glorifies, or facilitates the sexual exploitation or abuse of minors, including but not limited to:

  • Any visual depiction (photograph, video, computer-generated image, or digitally altered image) of a minor engaged in sexually explicit conduct.
  • Any content that constitutes child sexual abuse material (“CSAM”) as defined under 18 U.S.C. sections 2251-2260 or equivalent provisions under applicable law.
  • Content that sexualizes minors in any way, including suggestive or provocative depictions, even if the minor is not depicted in an explicitly sexual manner.
  • AI-generated or AI-manipulated imagery depicting minors in sexual or sexualized contexts.
  • Content that promotes, facilitates, or provides guidance for the grooming, exploitation, trafficking, or abuse of minors.
  • Links to external websites or services that host or distribute CSAM.

We maintain a zero-tolerance policy for CSAM. Any User Content that we reasonably believe constitutes CSAM will be immediately removed, the associated account will be permanently terminated without prior notice, and the content will be reported to the National Center for Missing & Exploited Children (NCMEC) and applicable law enforcement authorities in accordance with our legal obligations under 18 U.S.C. section 2258A and other applicable law.

Content that threatens, promotes, glorifies, incites, or facilitates violence against any individual, group, or entity, including but not limited to:

  • Credible threats of physical violence, including threats to kill, assault, or injure another person.
  • Content that glorifies, celebrates, or trivializes acts of violence, mass casualty events, or terrorism.
  • Content that incites or encourages others to commit acts of violence.
  • Graphic depictions of violence against humans or animals that are shared for shock value, entertainment, or to intimidate others (as distinguished from newsworthy, educational, or documentary content shared with appropriate context and content warnings).
  • Content that promotes, provides instructions for, or facilitates acts of terrorism, including recruitment materials, propaganda, and operational planning.
  • Content that promotes or glorifies self-harm or suicide, or that provides methods or instructions for self-harm or suicide (crisis resources and educational content addressing these topics with appropriate sensitivity and context are permitted).

Content that attacks, demeans, dehumanizes, or incites hatred or discrimination against individuals or groups based on protected characteristics, including but not limited to:

  • Race, ethnicity, or national origin
  • Religion or lack thereof
  • Sex, gender, or gender identity
  • Sexual orientation
  • Disability (physical, mental, intellectual, or sensory)
  • Age
  • Veteran or military status
  • Immigration or citizenship status
  • Caste

Hate speech includes, without limitation: slurs and derogatory epithets directed at protected groups; claims that members of a protected group are inherently inferior, dangerous, or subhuman; calls for exclusion, segregation, or violence against protected groups; content that denies or distorts well-documented historical atrocities committed against protected groups; and dehumanizing comparisons or characterizations of members of protected groups.

Content or behavior that is intended to or has the effect of harassing, bullying, intimidating, or abusing another person, including but not limited to:

  • Sustained, targeted, and unwanted contact with another user after being asked to stop.
  • Coordinated harassment campaigns (also known as “brigading” or “pile-ons”) directed at a specific individual or group.
  • Sexual harassment, including unwanted sexual advances, sexually explicit messages, or the non-consensual sharing of intimate or sexually explicit images or videos of another person (commonly known as “revenge porn” or “non-consensual intimate imagery”).
  • Cyberstalking, defined as a pattern of conduct directed at a specific person that would cause a reasonable person to feel fear for their safety or the safety of others.
  • Content that mocks, ridicules, or demeans individuals based on their physical appearance, disability, medical condition, or personal circumstances.
  • Deliberately misgendering or deadnaming transgender individuals with the intent to harass or demean.

Content that exposes, publishes, or threatens to expose the private, personally identifiable information of another person without their explicit consent, including but not limited to:

  • Home addresses or physical locations.
  • Personal phone numbers or private email addresses.
  • Government-issued identification numbers (e.g., Social Security numbers, passport numbers, driver’s license numbers).
  • Financial account information (e.g., bank account numbers, credit card numbers).
  • Medical or health information.
  • Private photographs or recordings obtained without consent.
  • Information about a person’s minor children, including their names, photographs, school locations, or daily routines.
  • Any other information that, if disclosed, could be used to identify, locate, contact, or harm an individual or their family members.

This prohibition applies regardless of whether the information is publicly available through other sources. The aggregation and targeted dissemination of otherwise public information with the intent to harass, threaten, or endanger an individual constitutes doxxing under this Policy.

Content or behavior that is unsolicited, repetitive, deceptive, or designed to artificially manipulate Platform metrics or user attention, including but not limited to:

  • Sending unsolicited bulk messages, promotional materials, or advertisements via direct messages, rooms, comments, or other communication features.
  • Posting repetitive, substantially identical, or low-quality content across multiple areas of the Platform.
  • Creating multiple accounts for the purpose of artificially inflating engagement metrics (e.g., likes, follows, views), evading enforcement actions, or manipulating Platform features.
  • Using misleading or deceptive tactics to drive traffic, clicks, or engagement (commonly known as “clickbait” when it involves materially misleading titles, thumbnails, or descriptions).
  • Purchasing or selling engagement metrics (e.g., likes, followers, views, comments) from or to third parties.
  • Participating in coordinated inauthentic behavior, including the use of fake accounts, bot networks, or paid actors to artificially amplify content, manipulate discussions, or create a false impression of popularity or consensus.

Content that contains, distributes, or facilitates the distribution of malicious software, code, or technical exploits, including but not limited to:

  • Viruses, worms, trojans, ransomware, spyware, adware, or any other form of malware.
  • Phishing pages, deceptive login forms, or other content designed to trick users into revealing their credentials, personal information, or financial information.
  • Links to websites or downloads that contain malware or exploit known security vulnerabilities.
  • Code or scripts designed to disrupt, damage, or gain unauthorized access to the Platform, its infrastructure, or the devices of other users.
  • Tools, tutorials, or instructions for creating or distributing malware, unless shared in a clearly educational or cybersecurity research context with appropriate safeguards.

Content or conduct that falsely represents or implies an affiliation, endorsement, or identity, including but not limited to:

  • Creating accounts, profiles, or pages that impersonate another real person, brand, organization, or government entity without clear disclosure that the account is a parody, fan account, or commentary account.
  • Using another person’s name, likeness, photograph, biographical details, or other identifying information to create a false impression that you are that person or are authorized to act on their behalf.
  • Falsely claiming to be a Mindhyv employee, contractor, representative, or agent.
  • Falsely claiming to hold a professional credential, certification, or license that you do not possess.
  • Using the Platform’s creator verification features to falsely verify an identity or affiliation.

Parody, satire, fan, and commentary accounts are permitted provided that they are clearly and prominently labeled as such in the account name and/or biography, and that no reasonable person would be confused as to the account’s true nature.

Content that is demonstrably false or materially misleading and that poses a significant risk of harm, including but not limited to:

  • Health Misinformation: False or misleading claims about medical treatments, vaccines, diseases, or public health measures that could lead individuals to forego medically necessary treatment, engage in dangerous health practices, or undermine public health responses to epidemics or pandemics.
  • Election and Civic Misinformation: False or misleading information about voting procedures, election dates, candidate eligibility, election results, or other civic processes that could suppress voter participation or undermine the integrity of democratic processes.
  • Dangerous Conspiracy Theories: False claims that have been used to incite violence, harassment campaigns, or discrimination against identifiable individuals or groups.
  • Manipulated Media: Media (images, video, or audio) that has been materially altered, edited, or fabricated in a way that is not clearly disclosed and that could mislead viewers about the depicted events, statements, or circumstances.
  • Fraudulent Schemes: Content that promotes scams, Ponzi schemes, pyramid schemes, or other fraudulent financial schemes.

We recognize that distinguishing between misinformation and legitimate debate on contested topics requires careful judgment. We will consider the following factors when evaluating content under this section: (a) the degree of scientific or expert consensus on the topic; (b) the potential for real-world harm; (c) whether the content is presented as established fact or as opinion, speculation, or satire; and (d) the availability of context, corrections, or alternative viewpoints.


3. Prohibited Conduct

In addition to the content restrictions set forth in Section 2, the following conduct is prohibited on the Platform:

Any conduct that manipulates or attempts to manipulate the Platform’s systems, algorithms, features, or ranking mechanisms, including but not limited to:

  • Exploiting bugs, glitches, or vulnerabilities in the Platform’s software for personal advantage or to the detriment of other users.
  • Manipulating search rankings, recommendation algorithms, or content discovery features through artificial or deceptive means.
  • Artificially inflating or deflating engagement metrics, including through the use of bots, automated scripts, click farms, or coordinated inauthentic behavior.
  • Manipulating the Platform’s reporting or content moderation systems by filing false, frivolous, or bad-faith reports against other users or their content.
  • Interfering with or manipulating the Platform’s advertising auction, delivery, or measurement systems.

3.2 Data Scraping and Unauthorized Data Collection


The unauthorized collection, extraction, or harvesting of data from the Platform, including but not limited to:

  • Scraping, crawling, or using automated means to access, collect, or index Platform content, user data, or metadata without our prior written authorization.
  • Collecting, aggregating, or storing the personal information of other users (including usernames, email addresses, profile data, or content) for purposes not expressly authorized by the Platform.
  • Using data obtained from the Platform to build or contribute to databases, directories, or datasets for commercial or non-commercial purposes without our prior written authorization.
  • Accessing the Platform’s API in a manner that exceeds authorized rate limits or uses API credentials that were not issued to you.

This prohibition does not apply to (a) search engine indexing that complies with our robots.txt directives, or (b) the use of Platform APIs in strict compliance with our published API documentation and terms of use.

The use of automated tools, scripts, bots, or software to interact with the Platform in a manner that is not expressly authorized, including but not limited to:

  • Automating account creation, login, or registration processes.
  • Automating the posting, liking, commenting, sharing, following, messaging, or other engagement actions.
  • Automating the purchase of Marketplace items, course enrollments, or other transactional actions.
  • Using automated tools to circumvent rate limits, CAPTCHA challenges, or other anti-abuse mechanisms.
  • Operating bots that interact with other users without clearly disclosing their automated nature.

Authorized integrations that use our officially published APIs in compliance with the applicable API documentation and terms are exempt from this prohibition.

Any attempt to circumvent, disable, interfere with, or bypass the Platform’s security, authentication, authorization, or content moderation measures, including but not limited to:

  • Attempting to access accounts, data, or systems that you are not authorized to access.
  • Attempting to bypass content filters, blacklists, or other content moderation mechanisms.
  • Circumventing account suspensions, bans, or other enforcement actions by creating new accounts, using alternative identities, or employing technical means to evade detection (commonly known as “ban evasion”).
  • Attempting to intercept, decrypt, or reverse-engineer communications or data transmissions between users or between users and the Platform.
  • Probing, scanning, or testing the vulnerability of the Platform’s systems or networks without prior written authorization.
  • Attempting to decompile, disassemble, or reverse-engineer the Platform’s software, except to the extent expressly permitted by applicable law.

We maintain a responsible disclosure program for security researchers who identify vulnerabilities in good faith. Contact [SECURITY_EMAIL] for details.

Any conduct that interferes with or disrupts the normal operation of the Platform, its infrastructure, or the experience of other users, including but not limited to:

  • Launching or facilitating denial-of-service (DoS) or distributed denial-of-service (DDoS) attacks against the Platform or its infrastructure.
  • Transmitting data or making requests that impose an unreasonable or disproportionate load on the Platform’s infrastructure.
  • Introducing malicious code, scripts, or exploits into the Platform’s systems.
  • Interfering with other users’ ability to access or use the Platform.
  • Engaging in any activity that degrades the performance, availability, or reliability of the Platform.

4. Marketplace-Specific Prohibitions

In addition to the general content and conduct prohibitions set forth in Sections 2 and 3, the following specific prohibitions apply to the Honeycomb Marketplace:

  • Listing, selling, or distributing counterfeit goods, including products bearing unauthorized reproductions of trademarks, logos, or brand identifiers.
  • Listing, selling, or distributing goods that infringe upon the intellectual property rights of third parties, including copyrights, trademarks, patents, and trade secrets.
  • Misrepresenting the origin, authenticity, quality, or condition of goods or services.

The following products and services may not be listed, sold, offered, or distributed through the Marketplace:

  • Illegal drugs, controlled substances, or drug paraphernalia (except as permitted by applicable law and with appropriate licensing).
  • Weapons, ammunition, explosives, or weapon accessories that are illegal to sell under applicable law.
  • Stolen property or goods obtained through illegal means.
  • Human organs, tissues, blood, or other human biological materials.
  • Endangered or protected species, wildlife products, or products derived from endangered species in violation of applicable law.
  • Products that have been recalled by a government agency or manufacturer due to safety concerns.
  • Products that are regulated and require specific licenses, permits, or certifications that the seller does not possess.
  • Tobacco products, electronic cigarettes, or vaping products (unless in compliance with all applicable regulations and age verification requirements).
  • Alcohol (unless in compliance with all applicable regulations and age verification requirements).
  • Prescription medications or controlled medical devices without appropriate authorization.
  • Services that involve or facilitate illegal activity.
  • Academic fraud services, including essay mills, exam-taking services, and fake credential services.

4.3 Price Manipulation and Fraudulent Practices

  • Engaging in price gouging, price fixing, bid rigging, or other anti-competitive pricing practices.
  • Listing items at artificially inflated or deflated prices for the purpose of manipulating search rankings, deceiving buyers, or facilitating money laundering.
  • Engaging in shill bidding (bidding on your own listings or having associates bid on your listings to artificially inflate prices).
  • Failing to deliver goods or services after receiving payment, or delivering goods or services that are materially different from what was described in the listing.
  • Using fake or manipulated reviews, ratings, or testimonials to deceive potential buyers.
  • Engaging in bait-and-switch tactics, where a product is advertised at a low price to attract buyers and then substituted with a different or inferior product at a higher price.

5. AI-Specific Prohibitions

The following prohibitions apply specifically to the use of AI-powered features, tools, and capabilities available on the Platform, including but not limited to AI content generation, AI-assisted editing, AI-powered recommendations, and any other feature that utilizes artificial intelligence or machine learning models:

Using AI features to generate content that would otherwise be prohibited under Section 2 of this Policy, including but not limited to: content that violates applicable law, CSAM, threats of violence, hate speech, harassment, doxxing material, or content that facilitates illegal activity.

Using AI features to create, distribute, or promote synthetic or manipulated media (“deepfakes”) that:

  • Depict real individuals saying or doing things they did not actually say or do, without their explicit consent and without clear and prominent disclosure that the content is AI-generated or AI-manipulated.
  • Are designed to deceive viewers into believing the depicted events, statements, or circumstances are real.
  • Are used to harass, defame, defraud, blackmail, or otherwise harm the depicted individual.
  • Depict real individuals in sexually explicit or intimate contexts without their explicit consent, regardless of whether a disclosure is included.
  • Depict minors in any harmful, sexualized, violent, or exploitative context.

Using AI features to impersonate another person, brand, organization, or entity, including but not limited to:

  • Using AI to generate content that mimics the writing style, voice, appearance, or mannerisms of a specific real person without their consent and without clear disclosure.
  • Using AI-generated voices, likenesses, or personas to create false endorsements, testimonials, or representations.
  • Using AI to generate fake customer reviews, ratings, or testimonials.

Using AI features to generate and distribute large volumes of low-quality, repetitive, misleading, or unsolicited content for the purpose of:

  • Spamming other users or flooding Platform features with AI-generated content.
  • Manipulating search rankings, recommendation algorithms, or content discovery features.
  • Artificially inflating engagement metrics or creating a false impression of activity.
  • Conducting large-scale phishing, social engineering, or scam campaigns.
  • Overwhelming content moderation systems.

Attempting to circumvent, bypass, or manipulate the safety measures, content filters, or usage restrictions built into the Platform’s AI features, including but not limited to:

  • Using prompt injection, jailbreaking, or other techniques to cause AI features to produce content that violates this Policy.
  • Attempting to extract proprietary model weights, training data, system prompts, or other confidential information from the Platform’s AI systems.
  • Using AI features in a manner that exceeds authorized usage limits or that is designed to impose an unreasonable load on AI infrastructure.

6. Enforcement

Mindhyv reserves the right to investigate and take appropriate enforcement action against any User who violates this Policy. Enforcement actions are applied based on the severity, frequency, and context of the violation, and may include one or more of the following measures:

A formal written notice to the User identifying the specific violation and informing the User that continued violations may result in escalated enforcement action.

Typical application: First-time, minor violations where the User may not have been aware of the specific policy requirement, such as inadvertent posting of content in the wrong category, minor spam behavior, or unintentional inclusion of sensitive content without the appropriate content flag.

The removal, de-listing, or disabling of specific User Content that violates this Policy. Content removal may occur with or without prior notice to the User, depending on the severity and urgency of the violation.

Typical application: Content that clearly violates a specific provision of this Policy, including prohibited content categories (Section 2), Marketplace violations (Section 4), or AI-specific violations (Section 5). Content removal may be applied as a standalone measure or in conjunction with other enforcement actions.

The temporary restriction of a User’s access to all or part of the Platform for a defined period. Temporary suspensions may be imposed at the following durations:

  • 24-hour suspension: Applied for moderate violations that pose an immediate but limited risk, such as a first-time incident of targeted harassment, posting borderline prohibited content, or minor Marketplace policy violations. During a 24-hour suspension, the User’s account and content remain intact but the User cannot post, comment, message, transact, or otherwise interact with the Platform.

  • 7-day suspension: Applied for serious violations or repeated moderate violations, such as a pattern of harassment, posting clearly prohibited content (e.g., graphic violence without context, hate speech), significant Marketplace fraud, or repeated spam after a prior warning. During a 7-day suspension, the User’s account and content remain intact but the User cannot access the Platform.

  • 30-day suspension: Applied for severe violations or a persistent pattern of policy violations despite prior enforcement actions, such as repeated hate speech, sustained harassment campaigns, significant fraud or deceptive practices, or AI-specific violations that pose a meaningful risk of harm. During a 30-day suspension, the User’s account and content remain intact but the User cannot access the Platform. A 30-day suspension serves as a final warning before permanent termination.

During any temporary suspension, the User will receive a notification specifying: (a) the reason for the suspension, (b) the specific Policy provision(s) violated, (c) the duration of the suspension, (d) the date and time the suspension will be lifted, and (e) information about the appeals process.

The permanent termination of the User’s account and access to the Platform. Permanent bans result in the deletion of the User’s account, the removal of all associated User Content, and the forfeiture of any unredeemed credits, commissions, or balances, subject to applicable law and any outstanding legal or financial obligations.

Typical application: The most severe violations, including but not limited to: posting or distributing CSAM; credible threats of violence; repeated or egregious hate speech after prior enforcement actions; sustained, severe harassment campaigns; significant fraud or financial crimes; distribution of malware; ban evasion (creating new accounts after a prior permanent ban); or any violation that poses an imminent threat to the safety of Platform users or the integrity of the Platform.

Users who are permanently banned are prohibited from creating new accounts, accessing the Platform through other accounts, or using any means to circumvent the ban.

In cases involving potential criminal activity, imminent threats to safety, CSAM, terrorism-related content, or other conduct that may violate applicable law, Mindhyv may, and in certain cases is legally required to, report the violation and associated account information to the appropriate law enforcement authorities or government agencies. Law enforcement referrals may be made in addition to any other enforcement action and may occur without prior notice to the User.

Typical application: All confirmed or suspected CSAM (mandatory reporting under 18 U.S.C. section 2258A); credible and imminent threats of violence against identified individuals or groups; content related to terrorism or terrorist financing; evidence of human trafficking; significant financial fraud; distribution of malware that has caused or is likely to cause substantial harm; and any other activity that Mindhyv reasonably believes constitutes a violation of criminal law.

Mindhyv may also take the following additional measures as appropriate:

  • Restricting specific account features (e.g., revoking messaging privileges, Marketplace selling privileges, or advertising capabilities) without a full suspension.
  • Requiring the User to complete additional verification steps before regaining full access.
  • Applying content-level restrictions, such as requiring manual review of all content posted by the User before publication.
  • Adding the User or specific content to internal blacklists or content filter databases.
  • Removing the User from the affiliate program and forfeiting unpaid commissions earned through policy-violating activity.
  • Suspending or terminating sites created through the site builder if those sites are used to host content that violates this Policy.

7. Appeals

Users who have received an enforcement action under this Policy have the right to appeal the decision. Appeals may be submitted for any enforcement action, including warnings, content removal, temporary suspensions, and permanent bans.

To submit an appeal, the User must send a written appeal to [APPEALS_EMAIL] within thirty (30) calendar days of the date the enforcement action was imposed. The appeal must include:

  1. The User’s full name and the username or email address associated with the affected account.
  2. The date of the enforcement action.
  3. A description of the enforcement action received (e.g., warning, content removal, suspension type, permanent ban).
  4. A clear and specific explanation of why the User believes the enforcement action was imposed in error, including any relevant facts, context, or evidence that was not considered in the original decision.
  5. Any supporting documentation, screenshots, or other evidence that supports the appeal.

Appeals that do not include the required information may be returned to the User for supplementation.

Upon receipt of a complete appeal, Mindhyv will:

  1. Acknowledge receipt of the appeal within five (5) business days.
  2. Assign the appeal to a reviewer who was not involved in the original enforcement decision.
  3. Conduct a thorough review of the appeal, including re-examination of the original content or conduct at issue, consideration of any new evidence or context provided by the User, and evaluation of whether the enforcement action was consistent with this Policy.
  4. Issue a written decision within fourteen (14) business days of receiving the complete appeal.

The appeal review may result in one of the following outcomes:

  • Upheld: The original enforcement action is affirmed and remains in effect.
  • Modified: The enforcement action is adjusted (e.g., a suspension duration is reduced, or a permanent ban is converted to a temporary suspension).
  • Overturned: The enforcement action is reversed, and the User’s account and/or content are restored to their prior state, to the extent technically feasible.

The decision issued on appeal is final and is not subject to further review, except where required by applicable law. Mindhyv reserves the right to decline to consider multiple appeals for the same enforcement action unless the User provides materially new information that was not available at the time of the original appeal.


8. Reporting Violations

Users are encouraged to report content and conduct that they believe violates this Policy. Reports can be submitted through the following methods:

  • In-Platform Reporting: Use the report feature available on all content items (posts, comments, messages, Marketplace listings, profiles, etc.) by clicking or tapping the report icon or “Report” option and selecting the applicable violation category.
  • Email: Send a detailed report to [REPORTS_EMAIL], including a description of the violation, links to the content at issue (if applicable), and any supporting evidence.
  • Dedicated Reporting Channels: For specific categories of violations, dedicated reporting channels may be available:
    • CSAM and child exploitation: [CSAM_REPORT_EMAIL]
    • Imminent threats of violence: [URGENT_REPORT_EMAIL]
    • Intellectual property infringement: [IP_REPORT_EMAIL]

When a report is submitted:

  1. The report is logged in our content moderation system and assigned a unique reference number.
  2. The reported content and/or account is reviewed by a trained content moderator (or, for certain categories, an automated content moderation system with human oversight).
  3. The moderator evaluates the report against this Policy, considering the content, context, and any applicable exceptions or defenses.
  4. An enforcement decision is made and applied, if warranted.
  5. The reporting User receives a notification confirming that the report has been received and reviewed. For privacy reasons, we may not disclose the specific enforcement action taken against the reported User.

Submitting reports that are knowingly false, frivolous, or made in bad faith (e.g., filing reports against a User to harass them, filing reports to suppress legitimate speech, or filing false intellectual property claims) is itself a violation of this Policy and may result in enforcement action against the reporting User.


9. Changes to This Policy

We reserve the right to update or modify this Acceptable Use Policy at any time. When we make material changes, we will:

  • Update the “Last Updated” date at the top of this Policy.
  • Post the revised Policy on the Platform.
  • Provide notice to registered Users of material changes via email or in-Platform notification at least fifteen (15) calendar days before the changes take effect, unless the changes are required to comply with applicable law or to address an imminent safety concern, in which case changes may take effect immediately.

Your continued use of the Platform after the effective date of any changes constitutes your acceptance of the revised Policy. If you do not agree to the revised Policy, you must discontinue your use of the Platform.


10. Contact Information

If you have questions, concerns, or feedback regarding this Acceptable Use Policy, please contact us at:

Mindhyv, LLC

Email: [LEGAL_EMAIL]

Mailing Address: [MAILING_ADDRESS]

For reporting violations, please refer to Section 8 of this Policy.

For appeals, please refer to Section 7 of this Policy.


This Acceptable Use Policy is provided for informational purposes and should be reviewed by qualified legal counsel before publication. This document does not constitute legal advice.