• 1 Post
  • 4 Comments
Joined 1 year ago
Cake day: October 27th, 2023


  • That’s a good question. In such a situation, there is probably not a lot you can do except tell her that she really needs to hire a CTO.

    Additionally, make sure you have insurance in the form of a paper trail: whenever she asks for feature XYZ, provide a written estimate of how long it will take. Later she cannot blame you for not having told her; you have proof that you did. If you want, you could also do this as part of an architecture decision document. That is an unusual use of the document, but together with the architecture decisions you will have captured the rationale for why building XYZ takes as long as it does.

    When I manage projects with a moderate to high degree of complexity, I typically add 20%–40% overhead to my high-level estimates for the “unknown unknowns”. I know stakeholders typically don’t like that, but we all know that projects follow the 80/20 principle.
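
    For illustration, here is a minimal sketch of what that buffer looks like in numbers (the feature names and base estimates are made up, not from any real project):

    ```python
    # Toy example: pad a high-level estimate with a 20%-40% buffer
    # for the "unknown unknowns" mentioned above.

    def padded_estimate(base_days: float, low: float = 0.20, high: float = 0.40) -> tuple[float, float]:
        """Return the (optimistic, pessimistic) range after adding overhead."""
        return base_days * (1 + low), base_days * (1 + high)

    # Hypothetical base estimates in working days.
    features = {"feature XYZ": 10, "reporting module": 25}

    for name, days in features.items():
        lo, hi = padded_estimate(days)
        print(f"{name}: {days}d base -> {lo:.0f}-{hi:.0f}d with buffer")
    ```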



  • To Startups@indiehackers.space, in “So I created a SaaS...” · 1 year ago

    How does EverMail AI ensure responsible AI use?

    At EverMail AI, we leverage cutting-edge Large Language Models (LLMs) to craft fully customized emails using natural language. We prioritize ethical AI development, ensuring fairness and data privacy, while eliminating the need for fill-in-the-blank templates.

    Aha. So much for answering the question on transparency and responsibility.

    So, in essence, you scrape their personal LinkedIn profile and use an LLM to create a custom email.

    What if I, as the recipient of the email, do not want you to scrape my data in the first place? Is there a way to object?


  • I have been wondering about this too. I have never seen a single situation where causal modeling was applied in practice.

    I wonder whether there is a fundamental flaw here.

    If you don’t know what causes what and only observe the correlation, you will most likely never uncover the underlying cause: the causal structure is too complicated, too hard to identify, or too short on data to ever figure out.

    If you do know what causes what, you can build your model accordingly. But in that situation the causality is usually not “perfect”. For example: if A causes B every single time, so that B occurs exactly when A occurs, then B does not really give you any additional information, because all of the information is already contained in A. If, however, B occurs only with a certain probability when A occurs, then you are back to not knowing exactly how the causality works, and there are unknown factors you cannot account for.
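
    To make that concrete, here is a toy sketch (my own illustration, with made-up probabilities): in the deterministic case B mirrors A and adds no new information, while in the noisy case observing B no longer tells you for certain that A happened.

    ```python
    # Toy comparison of a deterministic vs. a noisy cause-effect link.
    import random

    random.seed(0)
    N = 100_000

    # Deterministic case: B occurs exactly when A occurs,
    # so observing B adds nothing beyond what A already tells you.
    det_pairs = [(a, a) for a in (random.random() < 0.5 for _ in range(N))]
    p_same = sum(a == b for a, b in det_pairs) / N
    print(f"deterministic: P(B == A) = {p_same:.2f}")  # 1.00 -> B is redundant given A

    # Noisy case: A causes B only 60% of the time, plus 30% background noise.
    def noisy_effect(a: bool) -> bool:
        return random.random() < (0.6 if a else 0.3)

    noisy_pairs = [(a, noisy_effect(a)) for a in (random.random() < 0.5 for _ in range(N))]
    p_a_given_b = (sum(a and b for a, b in noisy_pairs)
                   / max(1, sum(b for _, b in noisy_pairs)))
    print(f"noisy: P(A | B) = {p_a_given_b:.2f}")  # ~0.67 -> B no longer pins down A
    ```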

    Don’t know, just some thoughts.