{"id":1518,"date":"2016-03-27T15:53:27","date_gmt":"2016-03-27T05:53:27","guid":{"rendered":"http:\/\/brnz.org\/hbr\/?p=1518"},"modified":"2016-03-29T10:47:55","modified_gmt":"2016-03-29T00:47:55","slug":"floats-bits-and-constant-expressions","status":"publish","type":"post","link":"https:\/\/brnz.org\/hbr\/?p=1518","title":{"rendered":"floats, bits, and constant expressions"},"content":{"rendered":"<p>Can you access the bits that represent an IEEE754 single precision float in a C++14 constant expression (constexpr)?<\/p>\n<p>(Why would you want to do that? Maybe you want to run a <a href=\"https:\/\/en.wikipedia.org\/wiki\/Fast_inverse_square_root\">fast inverse square root<\/a> at compile time. Or maybe you want to do something that is actually useful. I wanted to know if it could be done.)<\/p>\n<p>For context: this article is based on experiences using gcc-5.3.0 and clang-3.7.1 with -std=c++14 -march=native on a Sandy Bridge Intel i7. Where I reference sections from the C++ standard, I&#8217;m referring to the <a href=\"http:\/\/www.open-std.org\/jtc1\/sc22\/wg21\/docs\/papers\/2014\/n4296.pdf\">November 2014 draft<\/a>.<\/p>\n<p>Before going\u00a0further, I&#8217;ll quote 5.20.6 from the standard:<\/p>\n<blockquote><p>Since this International Standard imposes no restrictions on the accuracy of floating-point operations, it is unspecified whether the evaluation of a floating-point expression during translation yields the same result as the evaluation of the same expression (or the same operations on the same values) during program execution.<sup>88 <\/sup><\/p>\n<p>88) Nonetheless, implementations are encouraged to provide consistent results, irrespective of whether the evaluation was performed during translation and\/or during program execution.<\/p><\/blockquote>\n<p>In this post, I document things that worked (and didn&#8217;t work) for me. 
You may have a different experience.<\/p>\n<h3>Methods of conversion that won&#8217;t work<\/h3>\n<p>(Error text from g++-5.3.0)<\/p>\n<p>You can&#8217;t access the bits of a float via a typecast pointer [which is undefined behavior, and covered by 5.20.2.5]:<\/p>\n<pre class=\"brush: cpp; highlight: [3]; title: ; notranslate\" title=\"\">  constexpr uint32_t bits_cast(float f)\r\n  {\r\n    return *(uint32_t*)&amp;f; \/\/ [2]\r\n  }\r\n<\/pre>\n<pre class=\"brush: cpp; gutter: false; title: ; notranslate\" title=\"\">error:\r\n  accessing value of 'f' through a 'uint32_t {aka unsigned int}' glvalue\r\n  in a constant expression<\/pre>\n<p>You can&#8217;t convert it via a reinterpret cast [5.20.2.13]:<\/p>\n<pre class=\"brush: cpp; highlight: [5]; title: ; notranslate\" title=\"\">constexpr uint32_t bits_reinterpret_cast(float f) \r\n{ \r\n  const unsigned char* cf = reinterpret_cast&lt;const unsigned char*&gt;(&amp;f); \r\n  \/\/ endianness notwithstanding\r\n  return (cf[3] &lt;&lt; 24) | (cf[2] &lt;&lt; 16) | (cf[1] &lt;&lt; 8) | cf[0]; \r\n} \r\n<\/pre>\n<pre class=\"brush: cpp; gutter: false; title: ; notranslate\" title=\"\">error:\r\n  '*(cf + 3u)' is not a constant expression\r\n<\/pre>\n<p>(gcc reports an error with the memory access, but does not object to the <code>reinterpret_cast<\/code>. 
clang produces a specific error for the cast.)<\/p>\n<p>You can&#8217;t convert it through a union [gcc, for example,\u00a0<a href=\"https:\/\/gcc.gnu.org\/bugs\/#nonbugs\">permits this for non-constant expressions<\/a>, but the standard forbids it in 5.20.2.8]:<\/p>\n<pre class=\"brush: cpp; highlight: [8]; title: ; notranslate\" title=\"\">constexpr uint32_t bits_union(float f) \r\n{ \r\n  union Convert { \r\n    uint32_t u;\r\n    float f;\r\n    constexpr Convert(float f_) : f(f_) {}\r\n  };\r\n  return Convert(f).u;\r\n}\r\n<\/pre>\n<pre class=\"brush: cpp; gutter: false; title: ; notranslate\" title=\"\">error:\r\n  accessing 'bits_union(float)::Convert::u' member instead of \r\n  initialized 'bits_union(float)::Convert::f' member \r\n  in constant expression\r\n<\/pre>\n<p>You can&#8217;t use <code>memcpy()<\/code> [5.20.2.2]:<\/p>\n<pre class=\"brush: cpp; highlight: [4]; title: ; notranslate\" title=\"\">constexpr uint32_t bits_memcpy(float f) \r\n{\r\n  uint32_t u = 0;\r\n  memcpy(&amp;u, &amp;f, sizeof f);\r\n  return u;\r\n}\r\n<\/pre>\n<pre class=\"brush: cpp; gutter: false; title: ; notranslate\" title=\"\">error:\r\n  'memcpy(((void*)(&amp;u)), ((const void*)(&amp;f)), 4ul)' \r\n  is not a constant expression\r\n<\/pre>\n<p>And you can&#8217;t define a constexpr <code>memcpy()<\/code>-like function that is capable of the task [5.20.2.11]:<\/p>\n<pre class=\"brush: cpp; highlight: [6]; title: ; notranslate\" title=\"\">constexpr void* memcpy(void* dest, const void* src, size_t n)\r\n{\r\n  char* d = (char*)dest;\r\n  const char* s = (const char*)src;\r\n  while(n-- &gt; 0)\r\n    *d++ = *s++;\r\n  return dest;\r\n}\r\n\r\nconstexpr uint32_t bits_memcpy(float f)\r\n{\r\n  uint32_t u = 0;\r\n  memcpy(&amp;u, &amp;f, sizeof f);\r\n  return u;\r\n}\r\n<\/pre>\n<pre class=\"brush: cpp; gutter: false; title: ; notranslate\" title=\"\">error:\r\n  accessing value of 'u' through a 'char' glvalue\r\n  in a constant expression\r\n<\/pre>\n<p>So what can you 
do?<\/p>\n<h3>Floating point operations in constant expressions<\/h3>\n<p>For\u00a0<code>constexpr float f = 2.0f, g = 2.0f<\/code>\u00a0the following operations are available [as they are not ruled out\u00a0by anything I can see in 5.20]:<\/p>\n<ul>\n<li>Comparison of floating point values e.g.<br \/>\n<code>static_assert(f == g, \"not equal\");<\/code><\/li>\n<li>Floating point arithmetic operations e.g.<br \/>\n<code>static_assert(f * 2.0f == 4.0f, \"arithmetic failed\");<\/code><\/li>\n<li>Casts from float to integral value, often with well-defined semantics e.g.<br \/>\n<code>constexpr int i = (int)2.0f; static_assert(i == 2, \"conversion failed\");<\/code><\/li>\n<\/ul>\n<p>So I wrote a function (<code>uint32_t bits(float)<\/code>) that will return the binary representation of an IEEE754 single precision float. The full function is at the end of this post. I&#8217;ll go through the various steps required to produce (my best approximation of) the desired result.<\/p>\n<h3>1. Zero<\/h3>\n<p>When <code>bits()<\/code> is passed the value zero, we want this behavior:<\/p>\n<pre class=\"brush: cpp; title: ; notranslate\" title=\"\">static_assert(bits(0.0f) == 0x00000000);<\/pre>\n<p>And we can have it:<\/p>\n<pre class=\"brush: cpp; first-line: 24; title: ; notranslate\" title=\"\">  if (f == 0.0f)\r\n    return 0;\r\n<\/pre>\n<p>Nothing difficult about that.<\/p>\n<h3>2. Negative zero<\/h3>\n<p>In IEEE754 land, negative zero is a thing. Ideally, we&#8217;d like\u00a0this behavior:<\/p>\n<pre class=\"brush: cpp; title: ; notranslate\" title=\"\">static_assert(bits(-0.0f) == 0x80000000);<\/pre>\n<p>But the check for zero also matches negative zero.\u00a0Negative zero is not something that the C++ standard has anything to say about, given that IEEE754 is\u00a0an implementation choice [3.9.1.8: &#8220;The value representation of floating-point types is implementation defined&#8221;]. 
My compilers treat negative zero the same as zero for all comparisons and arithmetic operations. As such,\u00a0<code>bits()<\/code>\u00a0returns the wrong value when considering negative zero, returning <code>0x00000000<\/code> rather than the desired\u00a0<code>0x80000000<\/code>.<\/p>\n<p>I did look into other methods\u00a0for detecting negative zero in C++, without finding something that would work in a constant expression. I have seen divide by zero used as a way to detect negative zero (resulting in\u00a0\u00b1infinity, depending on the sign of the zero), but that doesn&#8217;t compile in a constant expression:<\/p>\n<pre class=\"brush: cpp; title: ; notranslate\" title=\"\">constexpr float r = 1.0f \/ -0.0f;<\/pre>\n<pre class=\"brush: cpp; gutter: false; title: ; notranslate\" title=\"\">error: '(1.0e+0f \/ -0.0f)' is not a constant expression\r\n<\/pre>\n<p>and divide by zero is explicitly named as undefined behavior in 5.6.4, and so by 5.20.2.5 is unusable in a constant expression.<\/p>\n<p>Result: negative zero becomes positive zero.<\/p>\n<h3>3. Infinity<\/h3>\n<p>We want this:<\/p>\n<pre class=\"brush: cpp; title: ; notranslate\" title=\"\">static_assert(bits(INFINITY) == 0x7f800000);<\/pre>\n<p>And this:<\/p>\n<pre class=\"brush: cpp; first-line: 26; title: ; notranslate\" title=\"\">  else if (f == INFINITY)\r\n    return 0x7f800000;\r\n<\/pre>\n<p>works as expected.<\/p>\n<h3>4. Negative Infinity<\/h3>\n<p>Same idea, different sign:<\/p>\n<pre class=\"brush: cpp; title: ; notranslate\" title=\"\">static_assert(bits(-INFINITY) == 0xff800000);<\/pre>\n<pre class=\"brush: cpp; first-line: 28; title: ; notranslate\" title=\"\">  else if (f == -INFINITY)\r\n    return 0xff800000;\r\n<\/pre>\n<p>Also works.<\/p>\n<h3>5. 
NaNs<\/h3>\n<p>There&#8217;s no way to generate arbitrary NaN constants in a constant expression that I can see (not least because casting bits to floats isn&#8217;t possible in a constant expression, either), so it seems impossible to get this right in general.<\/p>\n<p>In practice, maybe this is good enough:<\/p>\n<pre class=\"brush: cpp; title: ; notranslate\" title=\"\">static_assert(bits(NAN) == 0x7fc00000);<\/pre>\n<p>NaN values can be anywhere in the range of <code>0x7f800001 -- 0x7fffffff<\/code> and <code>0xff800001 -- 0xffffffff<\/code>. I have no idea as to the specific values that are seen in practice, nor what they mean. <code>0x7fc00000<\/code> shows up in <code>\/usr\/include\/bits\/nan.h<\/code> on the system I&#8217;m using to write this, so &#8212; right or wrong &#8212; I&#8217;ve chosen that as the reference value.<\/p>\n<p>It is possible to detect a NaN value in a constant expression, but not its payload. (At least that I&#8217;ve been able to find). So there&#8217;s this:<\/p>\n<pre class=\"brush: cpp; first-line: 30; title: ; notranslate\" title=\"\">  else if (f != f) \/\/ NaN\r\n    return 0x7fc00000; \/\/ This is my NaN...\r\n<\/pre>\n<p>Which means that of the 2*(2<sup>23<\/sup>-1) possible NaNs, one will be handled correctly (in this case, <code>0x7fc00000<\/code>). For the other 16,777,213 values, the wrong value will be returned (in this case, <code>0x7fc00000<\/code>).<\/p>\n<p>So&#8230; partial success?\u00a0NaNs are correctly detected, but the bits for only one NaN value will be returned correctly.<\/p>\n<p>(On the other hand, the probability that it will ever matter could be stored\u00a0as a denormalized float)<\/p>\n<h3>6. 
Normalized Values<\/h3>\n<pre class=\"brush: cpp; title: ; notranslate\" title=\"\">\/\/ pseudo-code\r\nstatic_assert(bits({  0x1p-126f, ...,  0x1.fffffep127f})\r\n                == { 0x00800000, ...,  0x7f7fffff});\r\nstatic_assert(bits({ -0x1p-126f, ..., -0x1.fffffep127f})\r\n                == { 0x80800000, ...,  0xff7fffff});\r\n<\/pre>\n<p>[<a href=\"http:\/\/en.cppreference.com\/w\/cpp\/language\/floating_literal\">That <code>0x1p<em>nnn<\/em>f<\/code> format<\/a>\u00a0happens to be a convenient way to represent exact values that can be stored as binary floating point numbers]<\/p>\n<p>It is possible to detect and correctly construct bits for every normalized value. It does require a little care to avoid truncation and undefined behavior. I wrote a few different implementations &#8212; the one that I describe here requires relatively little code, and doesn&#8217;t perform terribly [0].<\/p>\n<p>The first step is to find and clear the sign bit. This simplifies subsequent steps.<\/p>\n<pre class=\"brush: cpp; first-line: 33; title: ; notranslate\" title=\"\">  bool sign = f &lt; 0.0f; \r\n  float abs_f = sign ? -f : f;\r\n<\/pre>\n<p>Now we have <code>abs_f<\/code> &#8212; it&#8217;s positive, non-zero, non-infinite, and not a NaN.<\/p>\n<p>What happens when a float\u00a0is cast\u00a0to an integral type?<\/p>\n<pre class=\"brush: cpp; title: ; notranslate\" title=\"\">uint64_t i = (uint64_t)f;\r\n<\/pre>\n<p>The value of <code>f<\/code> will be stored in <code>i<\/code>, according to the following rules:<\/p>\n<ul>\n<li>The value will be rounded towards zero which, for positive values,\u00a0means truncation of any fractional part.<\/li>\n<li>If the value in <code>f<\/code> is too large to be represented as a <code>uint64_t<\/code>\u00a0(i.e. 
<code>f<\/code> &gt; 2<sup>64<\/sup>-1), the result is undefined.<\/li>\n<\/ul>\n<p>If truncation takes place, data is lost.\u00a0If the number is too large, the result is (probably) meaningless.<\/p>\n<p>For our conversion function, if we can scale <code>abs_f<\/code>\u00a0into a range where it is not larger than (2<sup>64<\/sup>-1), and it has no fractional part, we have access to an exact representation of the bits that make up the float. We just need to keep track of the amount of scaling being done.<\/p>\n<p>Single precision IEEE 754 floating point numbers have, at most, (23+1) bits of precision (23 in the significand, 1 implicit).\u00a0This means that we can scale down large numbers and scale up small numbers into the required range.<\/p>\n<p>Multiplying by a power of two changes only the exponent of the float, and leaves the significand unmodified. As such, we can arbitrarily scale a float by a power of two and &#8212; so long as we don&#8217;t over- or under-flow the float &#8212; we will not lose any of the bits in the significand.<\/p>\n<p>For the sake of simplicity (believe it or not [1]), my approach is to scale <code>abs_f<\/code>\u00a0in steps of 2<sup>41<\/sup> so that <code>abs_f<\/code>\u00a0\u2265 2<sup>87<\/sup>, like so:<\/p>\n<pre class=\"brush: cpp; first-line: 36; title: ; notranslate\" title=\"\">  int exponent = 254; \r\n\r\n  while(abs_f &lt; 0x1p87f) \r\n  { \r\n    abs_f *= 0x1p41f; \r\n    exponent -= 41; \r\n  }\r\n<\/pre>\n<p>Once <code>abs_f<\/code> \u2265\u00a02<sup>87<\/sup>, the least significant set bit of <code>abs_f<\/code> can be no smaller than 2<sup>(87-23)<\/sup> == 2<sup>64<\/sup>.<\/p>\n<p>Next, <code>abs_f<\/code> is scaled back down by 2<sup>64<\/sup>\u00a0(which introduces no fractional part, as no set bit is smaller than 2<sup>64<\/sup>) and converted to an unsigned 64 bit integer.<\/p>\n<pre class=\"brush: cpp; first-line: 44; title: ; notranslate\" title=\"\">  uint64_t a = (uint64_t)(abs_f * 0x1p-64f);\r\n<\/pre>\n<p>All of the bits of 
<code>abs_f<\/code> are now present in <code>a<\/code>, without overflow or truncation. All that is needed now is to determine where they are:<\/p>\n<pre class=\"brush: cpp; first-line: 45; title: ; notranslate\" title=\"\">  int lz = count_leading_zeroes(a);\r\n<\/pre>\n<p>adjust the exponent accordingly:<\/p>\n<pre class=\"brush: cpp; first-line: 46; title: ; notranslate\" title=\"\">  exponent -= lz;\r\n<\/pre>\n<p>and construct the result:<\/p>\n<pre class=\"brush: cpp; first-line: 54; title: ; notranslate\" title=\"\">  uint32_t significand = (a &lt;&lt; (lz + 1)) &gt;&gt; (64 - 23); \/\/ [3]\r\n  return (sign &lt;&lt; 31) | (exponent &lt;&lt; 23) | significand;\r\n<\/pre>\n<p>With this, we have correct results for every normalized float.<\/p>\n<h3>7. Denormalized Values<\/h3>\n<pre class=\"brush: cpp; title: ; notranslate\" title=\"\">\/\/ pseudo-code\r\nstatic_assert(bits({  0x1.0p-149f, ...,  0x1.fffffcp-127f})\r\n                == {  0x00000001,  ...,  0x007fffff});\r\nstatic_assert(bits({ -0x1.0p-149f, ..., -0x1.fffffcp-127f})\r\n                == {  0x80000001,  ...,  0x807fffff});\r\n<\/pre>\n<p>The final detail is denormalized values. The handling of normalized values presented so far fails for denormals, which have additional leading zeroes. They are fairly easy to account for:<\/p>\n<pre class=\"brush: cpp; first-line: 48; title: ; notranslate\" title=\"\">  if (exponent &lt;= 0) \r\n  { \r\n    exponent = 0; \r\n    lz = 8 - 1; \r\n  }\r\n<\/pre>\n<p>To attempt to demystify that <code>lz = 8 - 1<\/code>\u00a0a little: there are 8 leading bits that aren&#8217;t part of the significand of a denormalized single precision float after the repeated 2<sup>41<\/sup> scaling that has taken place. There is also no leading 1 bit, which is present in all normalized numbers (and is accounted for in the calculation of <code>significand<\/code> above as <code>(lz + 1)<\/code>). 
So the leading zero count (<code>lz<\/code>) is set to account for the 8 bits of offset to the start of the denormalized significand, minus the one that the subsequent calculation assumes it needs to skip over.<\/p>\n<p>And that&#8217;s it. All the possible values of a float are accounted for.<\/p>\n<p>(Side note: If you&#8217;re compiling with -ffast-math, passing denormalized numbers to <code>bits()<\/code> will return invalid results. That&#8217;s -ffast-math for you. With gcc or clang, you could add an <code>#ifdef __FAST_MATH__<\/code> around the test for negative exponent.)<\/p>\n<h3>Conclusion<\/h3>\n<p>You can indeed obtain the bit representation of a floating point number at compile time. Mostly. Negative zero is wrong, NaNs are detected but otherwise not accurately converted.<\/p>\n<p>Enjoy your compile-time bit-twiddling!<\/p>\n<hr \/>\n<p>The whole deal:<\/p>\n<pre class=\"brush: cpp; title: ; notranslate\" title=\"\">\/\/ Based on code from \r\n\/\/ https:\/\/graphics.stanford.edu\/~seander\/bithacks.html\r\nconstexpr int count_leading_zeroes(uint64_t v) \r\n{ \r\n  constexpr char bit_position[64] = {  \r\n     0,  1,  2,  7,  3, 13,  8, 19,  4, 25, 14, 28,  9, 34, 20, 40, \r\n     5, 17, 26, 38, 15, 46, 29, 48, 10, 31, 35, 54, 21, 50, 41, 57, \r\n    63,  6, 12, 18, 24, 27, 33, 39, 16, 37, 45, 47, 30, 53, 49, 56, \r\n    62, 11, 23, 32, 36, 44, 52, 55, 61, 22, 43, 51, 60, 42, 59, 58 }; \r\n   \r\n  v |= v &gt;&gt; 1; \/\/ first round down to one less than a power of 2  \r\n  v |= v &gt;&gt; 2; \r\n  v |= v &gt;&gt; 4; \r\n  v |= v &gt;&gt; 8; \r\n  v |= v &gt;&gt; 16; \r\n  v |= v &gt;&gt; 32; \r\n  v = (v &gt;&gt; 1) + 1; \r\n   \r\n  return 63 - bit_position[(v * 0x0218a392cd3d5dbf)&gt;&gt;58]; \/\/ [3]\r\n}\r\n \r\nconstexpr uint32_t bits(float f) \r\n{ \r\n  if (f == 0.0f) \r\n    return 0; \/\/ also matches -0.0f and gives wrong result \r\n  else if (f == INFINITY) \r\n    return 0x7f800000; \r\n  else if (f == -INFINITY) \r\n    return 
0xff800000; \r\n  else if (f != f) \/\/ NaN \r\n    return 0x7fc00000; \/\/ This is my NaN...\r\n \r\n  bool sign = f &lt; 0.0f; \r\n  float abs_f = sign ? -f : f; \r\n \r\n  int exponent = 254; \r\n \r\n  while(abs_f &lt; 0x1p87f) \r\n  { \r\n    abs_f *= 0x1p41f; \r\n    exponent -= 41; \r\n  } \r\n \r\n  uint64_t a = (uint64_t)(abs_f * 0x1p-64f); \r\n  int lz = count_leading_zeroes(a);\r\n  exponent -= lz;\r\n \r\n  if (exponent &lt;= 0) \r\n  { \r\n    exponent = 0; \r\n    lz = 8 - 1;\r\n  } \r\n \r\n  uint32_t significand = (a &lt;&lt; (lz + 1)) &gt;&gt; (64 - 23); \/\/ [3]\r\n  return (sign &lt;&lt; 31) | (exponent &lt;&lt; 23) | significand; \r\n}\r\n<\/pre>\n<p>[0] Why does runtime performance matter? Because that&#8217;s how I tested the conversion function while implementing it. I was applying\u00a0<a href=\"https:\/\/randomascii.wordpress.com\/2014\/01\/27\/theres-only-four-billion-floatsso-test-them-all\/\">Bruce Dawson&#8217;s advice for testing floats<\/a>\u00a0and the quicker I found out that I&#8217;d broken the conversion the better. For the implementation described in this post, it takes about 97 seconds to test all four billion float values on my laptop &#8212; half that time if I wasn&#8217;t testing negative numbers (which are unlikely to cause problems due to the way I handle the sign bit). The implementation I&#8217;ve described in this post is not the fastest solution to the problem, but it is relatively compact, and well behaved in the face of <code>-ffast-math<\/code>.<\/p>\n<p>Admission buried in a footnote: I have not validated correct behavior of this code for every floating point number in actual compile-time constant expressions. 
Compile-time evaluation of four billion invocations of <code>bits()<\/code>\u00a0takes more time than I&#8217;ve been willing to invest so far.<\/p>\n<p>[1] It is conceptually simpler to multiply <code>abs_f<\/code> by two (or one half) until the result is exactly positioned so that no leading zero count is required after the cast &#8212; at least, that was what I did in my first attempt. The approach described here was found to be significantly faster. I have no doubt that better-performing constant-expression-friendly approaches\u00a0exist.<\/p>\n<p>[2] Update 2016-03-28: Thanks to <a href=\"https:\/\/news.ycombinator.com\/item?id=11373216\">satbyy<\/a>\u00a0for pointing out the missing ampersand &#8212; it was lost sometime after copying the code into the article.<\/p>\n<p>[3] Update 2016-03-28: Thanks to <a href=\"https:\/\/www.reddit.com\/r\/cpp\/comments\/4c9753\/floats_bits_and_constant_expressions\/d1h1ttv\">louiswins<\/a> for pointing out additional code errors.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Can you access the bits that represent an IEEE754 single precision float in a C++14 constant expression (constexpr)? (Why would you want to do that? Maybe you want to run a fast inverse square root at compile time. Or maybe you want to do something that is actually useful. 
I wanted to know if it &hellip; <a href=\"https:\/\/brnz.org\/hbr\/?p=1518\" class=\"more-link\">Continue reading<span class=\"screen-reader-text\"> &#8220;floats, bits, and constant expressions&#8221;<\/span><\/a><\/p>\n","protected":false},"author":2,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[5,26],"tags":[],"_links":{"self":[{"href":"https:\/\/brnz.org\/hbr\/index.php?rest_route=\/wp\/v2\/posts\/1518"}],"collection":[{"href":"https:\/\/brnz.org\/hbr\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/brnz.org\/hbr\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/brnz.org\/hbr\/index.php?rest_route=\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/brnz.org\/hbr\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=1518"}],"version-history":[{"count":172,"href":"https:\/\/brnz.org\/hbr\/index.php?rest_route=\/wp\/v2\/posts\/1518\/revisions"}],"predecessor-version":[{"id":1691,"href":"https:\/\/brnz.org\/hbr\/index.php?rest_route=\/wp\/v2\/posts\/1518\/revisions\/1691"}],"wp:attachment":[{"href":"https:\/\/brnz.org\/hbr\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=1518"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/brnz.org\/hbr\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=1518"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/brnz.org\/hbr\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=1518"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}