MongoDB CRM Database Design Efficiency


The CRM I'm planning needs to be able to support a scale of 100 businesses for now, and ideally scale beyond that size.

The way it's set up right now:

Each business has 3 sections of data, and each section has 1000 "entries". Each entry has 30-50 "data chunks", and each data chunk has an id, the entry it corresponds to, a value indicating what type of data it is, and the value it's holding.

100 * 3 * 1000 * 30 = 9,000,000 pieces of data at the low end (30 chunks per entry).
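The scale estimate above, spelled out for both ends of the 30-50 chunks-per-entry range:

```javascript
// Rough scale estimate: 100 businesses x 3 sections x 1000 entries x 30-50 chunks
const low  = 100 * 3 * 1000 * 30;  // 9,000,000 data chunks
const high = 100 * 3 * 1000 * 50;  // 15,000,000 data chunks
console.log(low, high);
```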

I'll typically be pulling 100 entries at a given time, which means 3000-5000 data chunks being pulled, and once in a while as many as 1000 entries or more at once.

I have collections for businesses, sections, entries, and data chunks.
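One way to picture the four collections is by their document shapes. This is only an illustration of the structure described above; all field names here are assumptions, not the actual schema:

```javascript
// Hypothetical document shapes for the four collections (field names assumed).
const business = { _id: "business1", name: "Acme Corp" };
const section  = { _id: "section1", businessId: "business1", name: "Section 1" };
const entry    = { _id: "entry1", sectionId: "section1" };
const dataChunk = {
  _id: "chunk1",
  entryId: "entry1",      // the entry this chunk corresponds to
  type: "email",          // value indicating what type of data it is
  value: "jane@acme.com"  // the value it's holding
};
console.log(dataChunk.entryId);
```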

I'm setting it up this way because each business keeps different kinds of data than the others, and a SQL database doesn't work well for that.

A sample retrieval of data might go like this:

  • find 1 section by name (i.e. business1 has a section called section1)
  • find the 100 entries for that section
  • find the 30 data chunks for each of those entries

That'd result in 101 find calls: one find call for the entries, then one find call per entry to pull its data chunks, with the entry id as the match key in each of those 100 or so calls.
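The steps above can be sketched in memory to show where the 101 calls come from. Plain arrays stand in for the MongoDB collections, and `find` here is a simple filter wrapper rather than the driver call:

```javascript
// In-memory sketch of the "101 find calls" access pattern.
let calls = 0;
function find(coll, pred) { calls++; return coll.filter(pred); }

// 100 entries in one section, 30 chunks per entry (illustrative data).
const entries = Array.from({ length: 100 }, (_, i) => ({ _id: `e${i}`, sectionId: "s1" }));
const chunks = entries.flatMap(e =>
  Array.from({ length: 30 }, (_, j) => ({ _id: `${e._id}-c${j}`, entryId: e._id }))
);

// 1 find for the section's entries, then 1 find per entry for its chunks.
const sectionEntries = find(entries, e => e.sectionId === "s1");
const chunksByEntry = sectionEntries.map(e => find(chunks, c => c.entryId === e._id));

console.log(calls);                   // 101
console.log(chunksByEntry[0].length); // 30
```

This round-trip-per-entry shape is exactly the N+1 query pattern the answer below pushes back on.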

Is this a scalable database design? Is there a better way I should be doing it?

Putting all of this related data in separate collections is not a good choice. I'll remind you that MongoDB does not have joins by design, so you're going to have a hell of a time gathering the data chunks by going through the entries and sections collections.

Because the spec is pretty vague (I have no idea what a section, entry, or data chunk represents), it's hard to say how to design this. If it were me, I'd use maybe 2 collections - 1 for businesses, 1 for data chunks. Have the entry and section ids be fields on the data chunk documents.
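The two-collection suggestion amounts to denormalizing the section and entry ids onto each data-chunk document, so one query replaces the per-entry lookups. A minimal sketch, with all field names assumed for illustration:

```javascript
// Denormalized data-chunk document: business, section, and entry ids all live
// on the chunk itself, so a whole section's chunks come back from one query.
const dataChunk = {
  _id: "chunk1",
  businessId: "business1",
  sectionId: "section1",
  entryId: "entry1",
  type: "email",          // what kind of data this chunk holds
  value: "jane@acme.com"  // the value itself
};

// With the real driver, pulling 100 entries' worth of chunks becomes one call:
//   db.dataChunks.find({ businessId: "business1", sectionId: "section1" })
// backed by a compound index such as { businessId: 1, sectionId: 1, entryId: 1 }.
console.log(dataChunk.sectionId);
```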

